00:00:00.001 Started by upstream project "autotest-per-patch" build number 127144
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.100 The recommended git tool is: git
00:00:00.100 using credential 00000000-0000-0000-0000-000000000002
00:00:00.103 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.127 Fetching changes from the remote Git repository
00:00:00.128 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.150 Using shallow fetch with depth 1
00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.150 > git --version # timeout=10
00:00:00.177 > git --version # 'git version 2.39.2'
00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.202 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.202 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.296 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.307 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.317 Checking out Revision c396a3cd44e4090a57fb151c18fefbf4a9bd324b (FETCH_HEAD)
00:00:07.317 > git config core.sparsecheckout # timeout=10
00:00:07.328 > git read-tree -mu HEAD # timeout=10
00:00:07.345 > git checkout -f c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=5
00:00:07.370 Commit message: "jenkins/jjb-config: Use freebsd14 for the pkgdep-freebsd job"
00:00:07.371 > git rev-list --no-walk 571d49b51a09ef9417806101d0b05bbb896ef7c3 # timeout=10
00:00:07.471 [Pipeline] Start of Pipeline
00:00:07.484 [Pipeline] library
00:00:07.485 Loading library shm_lib@master
00:00:07.485 Library shm_lib@master is cached. Copying from home.
00:00:07.498 [Pipeline] node
00:00:07.505 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.507 [Pipeline] {
00:00:07.516 [Pipeline] catchError
00:00:07.517 [Pipeline] {
00:00:07.525 [Pipeline] wrap
00:00:07.533 [Pipeline] {
00:00:07.538 [Pipeline] stage
00:00:07.539 [Pipeline] { (Prologue)
00:00:07.711 [Pipeline] sh
00:00:07.989 + logger -p user.info -t JENKINS-CI
00:00:08.008 [Pipeline] echo
00:00:08.010 Node: GP2
00:00:08.018 [Pipeline] sh
00:00:08.321 [Pipeline] setCustomBuildProperty
00:00:08.333 [Pipeline] echo
00:00:08.334 Cleanup processes
00:00:08.338 [Pipeline] sh
00:00:08.616 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.616 1327212 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.629 [Pipeline] sh
00:00:08.909 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.909 ++ grep -v 'sudo pgrep'
00:00:08.909 ++ awk '{print $1}'
00:00:08.909 + sudo kill -9
00:00:08.909 + true
00:00:08.925 [Pipeline] cleanWs
00:00:08.933 [WS-CLEANUP] Deleting project workspace...
00:00:08.933 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.940 [WS-CLEANUP] done
00:00:08.944 [Pipeline] setCustomBuildProperty
00:00:08.958 [Pipeline] sh
00:00:09.242 + sudo git config --global --replace-all safe.directory '*'
00:00:09.329 [Pipeline] httpRequest
00:00:09.358 [Pipeline] echo
00:00:09.360 Sorcerer 10.211.164.101 is alive
00:00:09.369 [Pipeline] httpRequest
00:00:09.374 HttpMethod: GET
00:00:09.375 URL: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:09.376 Sending request to url: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:09.394 Response Code: HTTP/1.1 200 OK
00:00:09.395 Success: Status code 200 is in the accepted range: 200,404
00:00:09.395 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:11.256 [Pipeline] sh
00:00:11.542 + tar --no-same-owner -xf jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:11.556 [Pipeline] httpRequest
00:00:11.593 [Pipeline] echo
00:00:11.595 Sorcerer 10.211.164.101 is alive
00:00:11.602 [Pipeline] httpRequest
00:00:11.608 HttpMethod: GET
00:00:11.608 URL: http://10.211.164.101/packages/spdk_a4ac1b54960e9db85123483ffb448fb26244df01.tar.gz
00:00:11.609 Sending request to url: http://10.211.164.101/packages/spdk_a4ac1b54960e9db85123483ffb448fb26244df01.tar.gz
00:00:11.629 Response Code: HTTP/1.1 200 OK
00:00:11.630 Success: Status code 200 is in the accepted range: 200,404
00:00:11.631 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a4ac1b54960e9db85123483ffb448fb26244df01.tar.gz
00:00:59.379 [Pipeline] sh
00:00:59.659 + tar --no-same-owner -xf spdk_a4ac1b54960e9db85123483ffb448fb26244df01.tar.gz
00:01:02.957 [Pipeline] sh
00:01:03.243 + git -C spdk log --oneline -n5
00:01:03.243 a4ac1b549 raid: allow to skip rebuild when adding a base bdev
00:01:03.243 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:03.243 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:03.243 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:03.243 d005e023b raid: fix empty slot not updated in sb after resize
00:01:03.255 [Pipeline] }
00:01:03.267 [Pipeline] // stage
00:01:03.273 [Pipeline] stage
00:01:03.275 [Pipeline] { (Prepare)
00:01:03.289 [Pipeline] writeFile
00:01:03.303 [Pipeline] sh
00:01:03.580 + logger -p user.info -t JENKINS-CI
00:01:03.592 [Pipeline] sh
00:01:03.884 + logger -p user.info -t JENKINS-CI
00:01:03.894 [Pipeline] sh
00:01:04.172 + cat autorun-spdk.conf
00:01:04.172 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.172 SPDK_TEST_NVMF=1
00:01:04.172 SPDK_TEST_NVME_CLI=1
00:01:04.172 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.172 SPDK_TEST_NVMF_NICS=e810
00:01:04.172 SPDK_TEST_VFIOUSER=1
00:01:04.172 SPDK_RUN_UBSAN=1
00:01:04.172 NET_TYPE=phy
00:01:04.181 RUN_NIGHTLY=0
00:01:04.184 [Pipeline] readFile
00:01:04.209 [Pipeline] withEnv
00:01:04.211 [Pipeline] {
00:01:04.224 [Pipeline] sh
00:01:04.507 + set -ex
00:01:04.508 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:04.508 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:04.508 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.508 ++ SPDK_TEST_NVMF=1
00:01:04.508 ++ SPDK_TEST_NVME_CLI=1
00:01:04.508 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.508 ++ SPDK_TEST_NVMF_NICS=e810
00:01:04.508 ++ SPDK_TEST_VFIOUSER=1
00:01:04.508 ++ SPDK_RUN_UBSAN=1
00:01:04.508 ++ NET_TYPE=phy
00:01:04.508 ++ RUN_NIGHTLY=0
00:01:04.508 + case $SPDK_TEST_NVMF_NICS in
00:01:04.508 + DRIVERS=ice
00:01:04.508 + [[ tcp == \r\d\m\a ]]
00:01:04.508 + [[ -n ice ]]
00:01:04.508 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:04.508 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:04.508 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:04.508 rmmod: ERROR: Module irdma is not currently loaded
00:01:04.508 rmmod: ERROR: Module i40iw is not currently loaded
00:01:04.508 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:04.508 + true
00:01:04.508 + for D in $DRIVERS
00:01:04.508 + sudo modprobe ice
00:01:04.508 + exit 0
00:01:04.518 [Pipeline] }
00:01:04.536 [Pipeline] // withEnv
00:01:04.541 [Pipeline] }
00:01:04.559 [Pipeline] // stage
00:01:04.570 [Pipeline] catchError
00:01:04.572 [Pipeline] {
00:01:04.588 [Pipeline] timeout
00:01:04.589 Timeout set to expire in 50 min
00:01:04.591 [Pipeline] {
00:01:04.608 [Pipeline] stage
00:01:04.610 [Pipeline] { (Tests)
00:01:04.627 [Pipeline] sh
00:01:04.913 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.913 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.913 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.913 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:04.913 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:04.913 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:04.913 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:04.913 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:04.913 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:04.913 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:04.913 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:04.913 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.913 + source /etc/os-release
00:01:04.913 ++ NAME='Fedora Linux'
00:01:04.913 ++ VERSION='38 (Cloud Edition)'
00:01:04.913 ++ ID=fedora
00:01:04.913 ++ VERSION_ID=38
00:01:04.913 ++ VERSION_CODENAME=
00:01:04.913 ++ PLATFORM_ID=platform:f38
00:01:04.913 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:04.913 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:04.913 ++ LOGO=fedora-logo-icon
00:01:04.913 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:04.913 ++ HOME_URL=https://fedoraproject.org/
00:01:04.913 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:04.913 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:04.913 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:04.913 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:04.913 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:04.913 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:04.913 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:04.913 ++ SUPPORT_END=2024-05-14
00:01:04.913 ++ VARIANT='Cloud Edition'
00:01:04.913 ++ VARIANT_ID=cloud
00:01:04.913 + uname -a
00:01:04.913 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:04.913 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:05.852 Hugepages
00:01:05.852 node hugesize free / total
00:01:05.852 node0 1048576kB 0 / 0
00:01:05.852 node0 2048kB 0 / 0
00:01:05.852 node1 1048576kB 0 / 0
00:01:05.852 node1 2048kB 0 / 0
00:01:05.852
00:01:05.852 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:05.852 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:01:05.852 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:01:05.852 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:01:05.852 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:01:05.852 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:01:05.852 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:01:05.852 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:01:05.852 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:01:05.852 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:01:05.852 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:01:05.852 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:01:05.852 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:01:05.852 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:01:05.852 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:01:05.852 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:01:05.852 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:01:05.852 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:05.852 + rm -f /tmp/spdk-ld-path
00:01:05.852 + source autorun-spdk.conf
00:01:05.852 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.852 ++ SPDK_TEST_NVMF=1
00:01:05.852 ++ SPDK_TEST_NVME_CLI=1
00:01:05.852 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:05.852 ++ SPDK_TEST_NVMF_NICS=e810
00:01:05.852 ++ SPDK_TEST_VFIOUSER=1
00:01:05.852 ++ SPDK_RUN_UBSAN=1
00:01:05.852 ++ NET_TYPE=phy
00:01:05.852 ++ RUN_NIGHTLY=0
00:01:05.852 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:05.852 + [[ -n '' ]]
00:01:05.852 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:05.852 + for M in /var/spdk/build-*-manifest.txt
00:01:05.852 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:05.852 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:05.852 + for M in /var/spdk/build-*-manifest.txt
00:01:05.852 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:05.852 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:05.852 ++ uname
00:01:05.852 + [[ Linux == \L\i\n\u\x ]]
00:01:05.852 + sudo dmesg -T
00:01:05.852 + sudo dmesg --clear
00:01:05.852 + dmesg_pid=1327779
00:01:05.852 + [[ Fedora Linux == FreeBSD ]]
00:01:05.852 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:05.852 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:05.852 + sudo dmesg -Tw
00:01:05.852 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:05.852 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:05.852 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:05.852 + [[ -x /usr/src/fio-static/fio ]]
00:01:05.852 + export FIO_BIN=/usr/src/fio-static/fio
00:01:05.852 + FIO_BIN=/usr/src/fio-static/fio
00:01:05.852 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:05.852 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:05.852 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:05.852 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:05.852 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:05.852 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:05.852 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:05.852 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:05.852 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:05.852 Test configuration:
00:01:05.852 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.852 SPDK_TEST_NVMF=1
00:01:05.852 SPDK_TEST_NVME_CLI=1
00:01:05.852 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:05.852 SPDK_TEST_NVMF_NICS=e810
00:01:05.852 SPDK_TEST_VFIOUSER=1
00:01:05.852 SPDK_RUN_UBSAN=1
00:01:05.852 NET_TYPE=phy
00:01:06.111 RUN_NIGHTLY=0
10:07:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
10:07:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
10:07:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:07:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
10:07:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:07:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:07:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:07:55 -- paths/export.sh@5 -- $ export PATH
10:07:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:07:55 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
10:07:55 -- common/autobuild_common.sh@447 -- $ date +%s
10:07:55 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721894875.XXXXXX
10:07:55 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721894875.hQ5quQ
10:07:55 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
10:07:55 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
10:07:55 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
10:07:55 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
10:07:55 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
10:07:55 -- common/autobuild_common.sh@463 -- $ get_config_params
10:07:55 -- common/autotest_common.sh@398 -- $ xtrace_disable
10:07:55 -- common/autotest_common.sh@10 -- $ set +x
10:07:55 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
10:07:55 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
10:07:55 -- pm/common@17 -- $ local monitor
10:07:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:07:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:07:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:07:55 -- pm/common@21 -- $ date +%s
10:07:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:07:55 -- pm/common@21 -- $ date +%s
10:07:55 -- pm/common@25 -- $ sleep 1
10:07:55 -- pm/common@21 -- $ date +%s
10:07:55 -- pm/common@21 -- $ date +%s
10:07:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721894875
10:07:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721894875
00:01:06.111 10:07:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721894875
10:07:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721894875
00:01:06.111 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721894875_collect-vmstat.pm.log
00:01:06.111 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721894875_collect-cpu-load.pm.log
00:01:06.111 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721894875_collect-cpu-temp.pm.log
00:01:06.111 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721894875_collect-bmc-pm.bmc.pm.log
00:01:07.052 10:07:56 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:07.052 10:07:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:07.052 10:07:56 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:07.052 10:07:56 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.052 10:07:56 -- spdk/autobuild.sh@16 -- $ date -u
00:01:07.052 Thu Jul 25 08:07:56 AM UTC 2024
00:01:07.052 10:07:56 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:07.052 v24.09-pre-322-ga4ac1b549
00:01:07.052 10:07:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:07.052 10:07:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:07.052 10:07:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:07.052 10:07:56 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:07.052 10:07:56 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:07.052 10:07:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:07.052 ************************************
00:01:07.052 START TEST ubsan
00:01:07.052 ************************************
00:01:07.052 10:07:56 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:07.052 using ubsan
00:01:07.052
00:01:07.052 real 0m0.000s
00:01:07.052 user 0m0.000s
00:01:07.052 sys 0m0.000s
00:01:07.052 10:07:56 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:07.052 10:07:56 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:07.052 ************************************
00:01:07.052 END TEST ubsan
00:01:07.052 ************************************
00:01:07.052 10:07:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:07.052 10:07:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:07.052 10:07:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:07.052 10:07:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:07.052 10:07:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:07.052 10:07:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:07.052 10:07:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:07.052 10:07:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:07.052 10:07:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:07.311 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:07.311 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:07.570 Using 'verbs' RDMA provider
00:01:18.123 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:30.343 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:30.343 Creating mk/config.mk...done.
00:01:30.344 Creating mk/cc.flags.mk...done.
00:01:30.344 Type 'make' to build.
00:01:30.344 10:08:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j32
00:01:30.344 10:08:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:30.344 10:08:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:30.344 10:08:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.344 ************************************
00:01:30.344 START TEST make
00:01:30.344 ************************************
00:01:30.344 10:08:18 make -- common/autotest_common.sh@1125 -- $ make -j32
00:01:30.608 make[1]: Nothing to be done for 'all'.
00:01:30.608 The Meson build system
00:01:30.608 Version: 1.3.1
00:01:30.608 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:30.608 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:30.608 Build type: native build
00:01:30.608 Project name: libvfio-user
00:01:30.608 Project version: 0.0.1
00:01:30.608 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:30.608 C linker for the host machine: cc ld.bfd 2.39-16
00:01:30.608 Host machine cpu family: x86_64
00:01:30.608 Host machine cpu: x86_64
00:01:30.608 Run-time dependency threads found: YES
00:01:30.608 Library dl found: YES
00:01:30.608 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:30.608 Run-time dependency json-c found: YES 0.17
00:01:30.608 Run-time dependency cmocka found: YES 1.1.7
00:01:30.608 Program pytest-3 found: NO
00:01:30.608 Program flake8 found: NO
00:01:30.608 Program misspell-fixer found: NO
00:01:30.608 Program restructuredtext-lint found: NO
00:01:30.608 Program valgrind found: YES (/usr/bin/valgrind)
00:01:30.608 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:30.608 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:30.608 Compiler for C supports arguments -Wwrite-strings: YES
00:01:30.608 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:30.608 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:30.608 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:30.608 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:30.608 Build targets in project: 8
00:01:30.608 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:30.608 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:30.608
00:01:30.608 libvfio-user 0.0.1
00:01:30.608
00:01:30.608 User defined options
00:01:30.608 buildtype : debug
00:01:30.608 default_library: shared
00:01:30.608 libdir : /usr/local/lib
00:01:30.608
00:01:30.608 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:31.566 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:31.566 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:31.831 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:31.831 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:31.831 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:31.831 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:31.831 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:31.831 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:31.831 [8/37] Compiling C object samples/null.p/null.c.o
00:01:31.831 [9/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:31.831 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:31.831 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:31.831 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:31.831 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:31.831 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:31.831 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:31.831 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:31.831 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:31.831 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:31.831 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:31.831 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:32.095 [21/37] Compiling C object samples/server.p/server.c.o
00:01:32.095 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:32.095 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:32.095 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:32.095 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:32.095 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:32.095 [27/37] Compiling C object samples/client.p/client.c.o
00:01:32.095 [28/37] Linking target lib/libvfio-user.so.0.0.1
00:01:32.095 [29/37] Linking target samples/client
00:01:32.095 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:32.353 [31/37] Linking target test/unit_tests
00:01:32.353 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:32.353 [33/37] Linking target samples/lspci
00:01:32.353 [34/37] Linking target samples/null
00:01:32.353 [35/37] Linking target samples/shadow_ioeventfd_server
00:01:32.353 [36/37] Linking target samples/gpio-pci-idio-16
00:01:32.353 [37/37] Linking target samples/server
00:01:32.353 INFO: autodetecting backend as ninja
00:01:32.353 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:32.353 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:33.303 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:33.303 ninja: no work to do.
00:01:39.942 The Meson build system
00:01:39.942 Version: 1.3.1
00:01:39.942 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:39.942 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:39.942 Build type: native build
00:01:39.942 Program cat found: YES (/usr/bin/cat)
00:01:39.942 Project name: DPDK
00:01:39.942 Project version: 24.03.0
00:01:39.942 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:39.942 C linker for the host machine: cc ld.bfd 2.39-16
00:01:39.942 Host machine cpu family: x86_64
00:01:39.942 Host machine cpu: x86_64
00:01:39.942 Message: ## Building in Developer Mode ##
00:01:39.942 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:39.942 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:39.942 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:39.942 Program python3 found: YES (/usr/bin/python3)
00:01:39.942 Program cat found: YES (/usr/bin/cat)
00:01:39.942 Compiler for C supports arguments -march=native: YES
00:01:39.942 Checking for size of "void *" : 8
00:01:39.942 Checking for size of "void *" : 8 (cached)
00:01:39.942 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:39.942 Library m found: YES
00:01:39.942 Library numa found: YES
00:01:39.942 Has header "numaif.h" : YES
00:01:39.942 Library fdt found: NO
00:01:39.942 Library execinfo found: NO
00:01:39.942 Has header "execinfo.h" : YES
00:01:39.942 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:39.942 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:39.942 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:39.942 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:39.942 Run-time dependency openssl found: YES 3.0.9
00:01:39.942 Run-time dependency libpcap found: YES 1.10.4
00:01:39.942 Has header "pcap.h" with dependency libpcap: YES
00:01:39.942 Compiler for C supports arguments -Wcast-qual: YES
00:01:39.942 Compiler for C supports arguments -Wdeprecated: YES
00:01:39.942 Compiler for C supports arguments -Wformat: YES
00:01:39.942 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:39.942 Compiler for C supports arguments -Wformat-security: NO
00:01:39.943 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:39.943 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:39.943 Compiler for C supports arguments -Wnested-externs: YES
00:01:39.943 Compiler for C supports arguments -Wold-style-definition: YES
00:01:39.943 Compiler for C supports arguments -Wpointer-arith: YES
00:01:39.943 Compiler for C supports arguments -Wsign-compare: YES
00:01:39.943 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:39.943 Compiler for C supports arguments -Wundef: YES
00:01:39.943 Compiler for C supports arguments -Wwrite-strings: YES
00:01:39.943 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:39.943 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:39.943 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:39.943 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:39.943 Program objdump found: YES (/usr/bin/objdump)
00:01:39.943 Compiler for C supports arguments -mavx512f: YES
00:01:39.943 Checking if "AVX512 checking" compiles: YES
00:01:39.943 Fetching value of define "__SSE4_2__" : 1
00:01:39.943 Fetching value of define "__AES__" : 1
00:01:39.943 Fetching value of define "__AVX__" : 1
00:01:39.943 Fetching value of define "__AVX2__" : (undefined)
00:01:39.943 Fetching value of define "__AVX512BW__" : (undefined)
00:01:39.943 Fetching value of define "__AVX512CD__" : (undefined)
00:01:39.943 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:39.943 Fetching value of define "__AVX512F__" : (undefined)
00:01:39.943 Fetching value of define "__AVX512VL__" : (undefined)
00:01:39.943 Fetching value of define "__PCLMUL__" : 1
00:01:39.943 Fetching value of define "__RDRND__" : (undefined)
00:01:39.943 Fetching value of define "__RDSEED__" : (undefined)
00:01:39.943 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:39.943 Fetching value of define "__znver1__" : (undefined)
00:01:39.943 Fetching value of define "__znver2__" : (undefined)
00:01:39.943 Fetching value of define "__znver3__" : (undefined)
00:01:39.943 Fetching value of define "__znver4__" : (undefined)
00:01:39.943 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:39.943 Message: lib/log: Defining dependency "log"
00:01:39.943 Message: lib/kvargs: Defining dependency "kvargs"
00:01:39.943 Message: lib/telemetry: Defining dependency "telemetry"
00:01:39.943 Checking for function "getentropy" : NO
00:01:39.943 Message: lib/eal: Defining dependency "eal"
00:01:39.943 Message: lib/ring: Defining dependency "ring"
00:01:39.943 Message: lib/rcu: Defining dependency "rcu"
00:01:39.943 Message: lib/mempool: Defining dependency "mempool"
00:01:39.943 Message: lib/mbuf: Defining dependency "mbuf"
00:01:39.943 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:39.943 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:39.943 Compiler for C supports arguments -mpclmul: YES
00:01:39.943 Compiler for C supports arguments -maes: YES
00:01:39.943 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:39.943 Compiler for C supports arguments -mavx512bw: YES
00:01:39.943 Compiler for C supports arguments -mavx512dq: YES
00:01:39.943 Compiler for C supports arguments -mavx512vl: YES
00:01:39.943 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:39.943 Compiler for C supports arguments -mavx2: YES
00:01:39.943 Compiler for C supports arguments -mavx: YES
00:01:39.943 Message: lib/net: Defining dependency "net"
00:01:39.943 Message: lib/meter: Defining dependency "meter"
00:01:39.943 Message: lib/ethdev: Defining dependency "ethdev"
00:01:39.943 Message: lib/pci: Defining dependency "pci"
00:01:39.943 Message: lib/cmdline: Defining dependency "cmdline"
00:01:39.943 Message: lib/hash: Defining dependency "hash"
00:01:39.943 Message: lib/timer: Defining dependency "timer"
00:01:39.943 Message: lib/compressdev: Defining dependency "compressdev"
00:01:39.943 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:39.943 Message: lib/dmadev: Defining dependency "dmadev"
00:01:39.943 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:39.943 Message: lib/power: Defining dependency "power"
00:01:39.943 Message: lib/reorder: Defining dependency "reorder"
00:01:39.943 Message: lib/security: Defining dependency "security"
00:01:39.943 Has header "linux/userfaultfd.h" : YES
00:01:39.943 Has header "linux/vduse.h" : YES
00:01:39.943 Message: lib/vhost: Defining dependency "vhost"
00:01:39.943 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:39.943 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:39.943 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:39.943 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:39.943 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:39.943 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:39.943 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:39.943 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:39.943 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:39.943 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:39.943 Program doxygen found: YES (/usr/bin/doxygen)
00:01:39.943 Configuring doxy-api-html.conf using configuration
00:01:39.943 Configuring doxy-api-man.conf using configuration
00:01:39.943 Program mandb found: YES (/usr/bin/mandb)
00:01:39.943 Program sphinx-build found: NO
00:01:39.943 Configuring rte_build_config.h using configuration
00:01:39.943 Message:
00:01:39.943 =================
00:01:39.943 Applications Enabled
00:01:39.943 =================
00:01:39.943
00:01:39.943 apps:
00:01:39.943
00:01:39.943
00:01:39.943 Message:
00:01:39.943 =================
00:01:39.943 Libraries Enabled
00:01:39.943 =================
00:01:39.943
00:01:39.943 libs:
00:01:39.943 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:39.943 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:39.943 cryptodev, dmadev, power, reorder, security, vhost,
00:01:39.943
00:01:39.943 Message:
00:01:39.943 ===============
00:01:39.943 Drivers Enabled
00:01:39.943 ===============
00:01:39.943
00:01:39.943 common:
00:01:39.943
00:01:39.943 bus:
00:01:39.943 pci, vdev,
00:01:39.943 mempool:
00:01:39.943 ring,
00:01:39.943 dma:
00:01:39.943
00:01:39.943 net:
00:01:39.943
00:01:39.943 crypto:
00:01:39.943
00:01:39.943 compress:
00:01:39.943
00:01:39.943 vdpa:
00:01:39.943
00:01:39.943
00:01:39.943 Message:
00:01:39.943 =================
00:01:39.943 Content Skipped
00:01:39.943 =================
00:01:39.943
00:01:39.943 apps:
00:01:39.943 dumpcap: explicitly disabled via build config
00:01:39.943 graph: explicitly disabled via build config
00:01:39.943 pdump: explicitly disabled via build config
00:01:39.943 proc-info: explicitly disabled via build config
00:01:39.943 test-acl: explicitly disabled via build config
00:01:39.943 test-bbdev: explicitly disabled via build config
00:01:39.943 test-cmdline: explicitly disabled via build config
00:01:39.943 test-compress-perf: explicitly disabled via build config
00:01:39.943 test-crypto-perf: explicitly disabled via build config
00:01:39.943 test-dma-perf: explicitly disabled via build config
00:01:39.943 test-eventdev: explicitly disabled via build config
00:01:39.943 test-fib: explicitly disabled via build config
00:01:39.943 test-flow-perf: explicitly disabled via build config
00:01:39.943 test-gpudev: explicitly disabled via build config
00:01:39.943 test-mldev: explicitly disabled via build config
00:01:39.943 test-pipeline: explicitly disabled via build config
00:01:39.943 test-pmd: explicitly disabled via build config
00:01:39.943 test-regex: explicitly disabled via build config
00:01:39.943 test-sad: explicitly disabled via build config
00:01:39.943 test-security-perf: explicitly disabled via build config
00:01:39.943
00:01:39.943 libs:
00:01:39.943 argparse: explicitly disabled via build config
00:01:39.943 metrics: explicitly disabled via build config
00:01:39.943 acl: explicitly disabled via build config
00:01:39.943 bbdev: explicitly disabled via build config
00:01:39.943 bitratestats: explicitly disabled via build config
00:01:39.943 bpf: explicitly disabled via build config
00:01:39.943 cfgfile: explicitly disabled via build config
00:01:39.943 distributor: explicitly disabled via build config
00:01:39.943 efd: explicitly disabled via build config
00:01:39.943 eventdev: explicitly disabled via build config
00:01:39.943 dispatcher: explicitly disabled via build config
00:01:39.943 gpudev: explicitly disabled via build config
00:01:39.943 gro: explicitly disabled via build config
00:01:39.943 gso: explicitly disabled via build config
00:01:39.943 ip_frag: explicitly disabled via build config
00:01:39.943 jobstats: explicitly disabled via build config
00:01:39.943 latencystats: explicitly disabled via build config
00:01:39.943 lpm: explicitly disabled via build config
00:01:39.943 member: explicitly disabled via build config
00:01:39.943 pcapng: explicitly disabled via build config
00:01:39.943 rawdev: explicitly disabled via build config
00:01:39.943 regexdev: explicitly disabled via build config
00:01:39.943 mldev: explicitly disabled via build config
00:01:39.943 rib: explicitly disabled via build config
00:01:39.943 sched: explicitly disabled via build config
00:01:39.944 stack: explicitly disabled via build config
00:01:39.944 ipsec: explicitly disabled via build config
00:01:39.944 pdcp: explicitly disabled via build config
00:01:39.944 fib: explicitly disabled via build config
00:01:39.944 port: explicitly disabled via build config
00:01:39.944 pdump: explicitly disabled via build config
00:01:39.944 table: explicitly disabled via build config
00:01:39.944 pipeline: explicitly disabled via build config
00:01:39.944 graph: explicitly disabled via build config
00:01:39.944 node: explicitly disabled via build config
00:01:39.944
00:01:39.944 drivers:
00:01:39.944 common/cpt: not in enabled drivers build config
00:01:39.944 common/dpaax: not in enabled drivers build config
00:01:39.944 common/iavf: not in enabled drivers build config
00:01:39.944 common/idpf: not in enabled drivers build config
00:01:39.944 common/ionic: not in enabled drivers build config
00:01:39.944 common/mvep: not in enabled drivers build config
00:01:39.944 common/octeontx: not in enabled drivers build config
00:01:39.944 bus/auxiliary: not in enabled drivers build config
00:01:39.944 bus/cdx: not in enabled drivers build config
00:01:39.944 bus/dpaa: not in enabled drivers build config
00:01:39.944 bus/fslmc: not in enabled drivers build config
00:01:39.944 bus/ifpga: not in enabled drivers build config
00:01:39.944 bus/platform: not in enabled drivers build config
00:01:39.944 bus/uacce: not in enabled drivers build config
00:01:39.944 bus/vmbus: not in enabled drivers build config
00:01:39.944 common/cnxk: not in enabled drivers build config
00:01:39.944 common/mlx5: not in enabled drivers build config
00:01:39.944 common/nfp: not in enabled drivers build config
00:01:39.944 common/nitrox: not in enabled drivers build config
00:01:39.944 common/qat: not in enabled drivers build config
00:01:39.944 common/sfc_efx: not in enabled drivers build config
00:01:39.944 mempool/bucket: not in enabled drivers build config
00:01:39.944 mempool/cnxk: not in enabled drivers build config
00:01:39.944 mempool/dpaa: not in enabled drivers build config
00:01:39.944 mempool/dpaa2: not in enabled drivers build config
00:01:39.944 mempool/octeontx: not in enabled drivers build config
00:01:39.944 mempool/stack: not in enabled drivers build config
00:01:39.944 dma/cnxk: not in enabled drivers build config
00:01:39.944 dma/dpaa: not in enabled drivers build config
00:01:39.944 dma/dpaa2: not in enabled drivers build config
00:01:39.944 dma/hisilicon: not in enabled drivers build config
00:01:39.944 dma/idxd: not in enabled drivers build config
00:01:39.944 dma/ioat: not in enabled drivers build config
00:01:39.944 dma/skeleton: not in enabled drivers build config
00:01:39.944 net/af_packet: not in enabled drivers build config
00:01:39.944 net/af_xdp: not in enabled drivers build config
00:01:39.944 net/ark: not in enabled drivers build config
00:01:39.944 net/atlantic: not in enabled drivers build config
00:01:39.944 net/avp: not in enabled drivers build config
00:01:39.944 net/axgbe: not in enabled drivers build config
00:01:39.944 net/bnx2x: not in enabled drivers build config
00:01:39.944 net/bnxt: not in enabled drivers build config
00:01:39.944 net/bonding: not in enabled drivers build config
00:01:39.944 net/cnxk: not in enabled drivers build config
00:01:39.944 net/cpfl: not in enabled drivers build config
00:01:39.944 net/cxgbe: not in enabled drivers build config
00:01:39.944 net/dpaa: not in enabled drivers build config
00:01:39.944 net/dpaa2: not in enabled drivers build config
00:01:39.944 net/e1000: not in enabled drivers build config
00:01:39.944 net/ena: not in enabled drivers build config
00:01:39.944 net/enetc: not in enabled drivers build config
00:01:39.944 net/enetfec: not in enabled drivers build config
00:01:39.944 net/enic: not in enabled drivers build config
00:01:39.944 net/failsafe: not in enabled drivers build config
00:01:39.944 net/fm10k: not in enabled drivers build config
00:01:39.944 net/gve: not in enabled drivers build config
00:01:39.944 net/hinic: not in enabled drivers build config
00:01:39.944 net/hns3: not in enabled drivers build config
00:01:39.944 net/i40e: not in enabled drivers build config
00:01:39.944 net/iavf: not in enabled drivers build config
00:01:39.944 net/ice: not in enabled drivers build config
00:01:39.944 net/idpf: not in enabled drivers build config
00:01:39.944 net/igc: not in enabled drivers build config
00:01:39.944 net/ionic: not in enabled drivers build config
00:01:39.944 net/ipn3ke: not in enabled drivers build config
00:01:39.944 net/ixgbe: not in enabled drivers build config
00:01:39.944 net/mana: not in enabled drivers build config
00:01:39.944 net/memif: not in enabled drivers build config
00:01:39.944 net/mlx4: not in enabled drivers build config
00:01:39.944 net/mlx5: not in enabled drivers build config
00:01:39.944 net/mvneta: not in enabled drivers build config
00:01:39.944 net/mvpp2: not in enabled drivers build config
00:01:39.944 net/netvsc: not in enabled drivers build config
00:01:39.944 net/nfb: not in enabled drivers build config
00:01:39.944 net/nfp: not in enabled drivers build config
00:01:39.944 net/ngbe: not in enabled drivers build config
00:01:39.944 net/null: not in enabled drivers build config
00:01:39.944 net/octeontx: not in enabled drivers build config
00:01:39.944 net/octeon_ep: not in enabled drivers build config
00:01:39.944 net/pcap: not in enabled drivers build config
00:01:39.944 net/pfe: not in enabled drivers build config
00:01:39.944 net/qede: not in enabled drivers build config
00:01:39.944 net/ring: not in enabled drivers build config
00:01:39.944 net/sfc: not in enabled drivers build config
00:01:39.944 net/softnic: not in enabled drivers build config
00:01:39.944 net/tap: not in enabled drivers build config
00:01:39.944 net/thunderx: not in enabled drivers build config
00:01:39.944 net/txgbe: not in enabled drivers build config
00:01:39.944 net/vdev_netvsc: not in enabled drivers build config
00:01:39.944 net/vhost: not in enabled drivers build config
00:01:39.944 net/virtio: not in enabled drivers build config
00:01:39.944 net/vmxnet3: not in enabled drivers build config
00:01:39.944 raw/*: missing internal dependency, "rawdev"
00:01:39.944 crypto/armv8: not in enabled drivers build config
00:01:39.944 crypto/bcmfs: not in enabled drivers build config
00:01:39.944 crypto/caam_jr: not in enabled drivers build config
00:01:39.944 crypto/ccp: not in enabled drivers build config
00:01:39.944 crypto/cnxk: not in enabled drivers build config
00:01:39.944 crypto/dpaa_sec: not in enabled drivers build config
00:01:39.944 crypto/dpaa2_sec: not in enabled drivers build config
00:01:39.944 crypto/ipsec_mb: not in enabled drivers build config
00:01:39.944 crypto/mlx5: not in enabled drivers build config
00:01:39.944 crypto/mvsam: not in enabled drivers build config
00:01:39.944 crypto/nitrox: not in enabled drivers build config
00:01:39.944 crypto/null: not in enabled drivers build config
00:01:39.944 crypto/octeontx: not in enabled drivers build config
00:01:39.944 crypto/openssl: not in enabled drivers build config
00:01:39.944 crypto/scheduler: not in enabled drivers build config
00:01:39.944 crypto/uadk: not in enabled drivers build config
00:01:39.944 crypto/virtio: not in enabled drivers build config
00:01:39.944 compress/isal: not in enabled drivers build config
00:01:39.944 compress/mlx5: not in enabled drivers build config
00:01:39.944 compress/nitrox: not in enabled drivers build config
00:01:39.944 compress/octeontx: not in enabled drivers build config
00:01:39.944 compress/zlib: not in enabled drivers build config
00:01:39.944 regex/*: missing internal dependency, "regexdev"
00:01:39.944 ml/*: missing internal dependency, "mldev"
00:01:39.944 vdpa/ifc: not in enabled drivers build config
00:01:39.944 vdpa/mlx5: not in enabled drivers build config
00:01:39.944 vdpa/nfp: not in enabled drivers build config
00:01:39.944 vdpa/sfc: not in enabled drivers build config
00:01:39.944 event/*: missing internal dependency, "eventdev"
00:01:39.944 baseband/*: missing internal dependency, "bbdev"
00:01:39.944 gpu/*: missing internal dependency, "gpudev"
00:01:39.944
00:01:39.944
00:01:39.944 Build targets in project: 85
00:01:39.944
00:01:39.944 DPDK 24.03.0
00:01:39.944
00:01:39.944 User defined options
00:01:39.944 buildtype : debug
00:01:39.944 default_library : shared
00:01:39.944 libdir : lib
00:01:39.944 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:39.944 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:39.944 c_link_args :
00:01:39.944 cpu_instruction_set: native
00:01:39.944 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:39.944 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:39.944 enable_docs : false
00:01:39.944 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:39.944 enable_kmods : false
00:01:39.944 max_lcores : 128
00:01:39.944 tests : false
00:01:39.944
00:01:39.944 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:39.944 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:40.206 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:40.206 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:40.206 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:40.206 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:40.206 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:40.206 [6/268] Linking static target lib/librte_kvargs.a
00:01:40.206 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:40.206 [8/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:40.206 [9/268] Linking static target lib/librte_log.a
00:01:40.206 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:40.206 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:40.206 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:40.206 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:40.465 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:40.727 [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.991 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:40.991 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:40.991 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:40.991 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:40.991 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:40.991 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:40.991 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:40.991 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:40.991 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:41.251 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:41.251 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:41.251 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:41.251 [28/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:41.251 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:41.251 [30/268] Linking static target lib/librte_telemetry.a
00:01:41.251 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:41.251 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:41.251 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:41.251 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:41.251 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:41.251 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:41.251 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:41.251 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:41.251 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:41.251 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:41.251 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:41.251 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:41.251 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:41.514 [44/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.514 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:41.514 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:41.514 [47/268] Linking target lib/librte_log.so.24.1
00:01:41.514 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:41.514 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:41.514 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:41.514 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:41.514 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:41.776 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:41.776 [54/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:42.036 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:42.036 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:42.036 [57/268] Linking target lib/librte_kvargs.so.24.1
00:01:42.036 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:42.036 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:42.036 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:42.300 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:42.300 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:42.300 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:42.300 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:42.300 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:42.300 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:42.300 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:42.300 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:42.300 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:42.300 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:42.300 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:42.300 [72/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.300 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:42.300 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:42.300 [75/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:42.300 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:42.300 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:42.300 [78/268] Linking target lib/librte_telemetry.so.24.1 00:01:42.300 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:42.565 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:42.565 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:42.565 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:42.565 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:42.824 [84/268] Linking static target lib/librte_ring.a 00:01:42.824 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:42.824 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:42.824 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:42.824 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:42.824 [89/268] Linking static target lib/librte_rcu.a 00:01:42.824 [90/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:42.824 [91/268] Linking static target lib/librte_eal.a 00:01:42.824 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:43.088 [93/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:43.088 [94/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:43.088 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:43.088 [96/268] Linking static target lib/librte_mempool.a 00:01:43.088 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:43.088 [98/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:43.088 [99/268] Linking static target lib/librte_pci.a 00:01:43.088 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:43.348 [101/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:43.348 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:43.348 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:43.348 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:43.348 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:43.348 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:43.348 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:43.348 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:43.348 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:43.348 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:43.610 [111/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.610 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:43.610 [113/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:43.610 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:43.610 [115/268] Linking static target lib/librte_meter.a 00:01:43.610 [116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:43.610 [117/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.610 [118/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:43.610 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:43.610 [120/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:43.611 [121/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:43.611 [122/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:43.611 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:43.611 [124/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.611 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:43.875 [126/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:43.875 [127/268] Linking static target lib/librte_net.a 00:01:43.875 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:43.875 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:43.875 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:43.875 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:43.875 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:43.875 [133/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:44.137 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.137 [135/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:44.137 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:44.137 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:44.137 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:44.137 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:44.137 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:44.137 [141/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:44.137 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:44.398 [143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.399 [144/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:44.399 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:44.399 [146/268] Linking static target lib/librte_cmdline.a 00:01:44.663 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:44.663 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:44.663 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:44.663 [150/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.663 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:44.663 [152/268] Linking static 
target lib/librte_timer.a 00:01:44.923 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:44.923 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:44.923 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:44.923 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:44.923 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:44.923 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:44.923 [159/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:44.923 [160/268] Linking static target lib/librte_mbuf.a 00:01:44.923 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:45.184 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:45.184 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:45.184 [164/268] Linking static target lib/librte_dmadev.a 00:01:45.184 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:45.184 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:45.184 [167/268] Linking static target lib/librte_hash.a 00:01:45.184 [168/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:45.451 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:45.451 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:45.451 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:45.451 [172/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:45.451 [173/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:45.451 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:45.451 [175/268] Linking static target lib/librte_compressdev.a 00:01:45.451 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:45.451 [177/268] Linking static target lib/librte_power.a 00:01:45.712 [178/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.712 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:45.712 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:45.712 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:45.972 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:45.972 [183/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.972 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.972 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:45.972 [186/268] Linking static target lib/librte_reorder.a 00:01:45.972 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:45.972 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:45.972 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:45.972 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:45.972 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.972 [192/268] Compiling C 
object lib/librte_security.a.p/security_rte_security.c.o 00:01:45.972 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:45.972 [194/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.972 [195/268] Linking static target lib/librte_security.a 00:01:45.972 [196/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.230 [197/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.230 [198/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.230 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:46.230 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.230 [201/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.230 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.230 [203/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.230 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:46.230 [205/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.488 [206/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.488 [207/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.488 [208/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.489 [209/268] Linking static target drivers/librte_bus_vdev.a 00:01:46.489 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:46.489 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.489 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.489 [213/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.489 [214/268] Linking static target drivers/librte_bus_pci.a 00:01:46.489 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:46.489 [216/268] Linking static target lib/librte_ethdev.a 00:01:46.489 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:46.489 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:46.747 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.747 [220/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.747 [221/268] Linking static target lib/librte_cryptodev.a 00:01:46.747 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.747 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.747 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.747 [225/268] Linking static target drivers/librte_mempool_ring.a 00:01:46.747 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.679 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.049 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.980 [229/268] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:50.235 [230/268] Linking target lib/librte_eal.so.24.1 00:01:50.235 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.235 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:50.235 [233/268] Linking target lib/librte_pci.so.24.1 00:01:50.235 [234/268] Linking target lib/librte_meter.so.24.1 00:01:50.235 [235/268] Linking target lib/librte_ring.so.24.1 00:01:50.235 [236/268] Linking target lib/librte_timer.so.24.1 00:01:50.235 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:50.236 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:50.492 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:50.492 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:50.492 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:50.492 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:50.492 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:50.492 [244/268] Linking target lib/librte_mempool.so.24.1 00:01:50.492 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:50.492 [246/268] Linking target lib/librte_rcu.so.24.1 00:01:50.748 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:50.748 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:50.748 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:50.749 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:50.749 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:50.749 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:50.749 [253/268] Linking target lib/librte_net.so.24.1 00:01:50.749 [254/268] Linking target lib/librte_compressdev.so.24.1 00:01:51.005 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:51.005 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:51.005 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:51.005 [258/268] Linking target lib/librte_cmdline.so.24.1 00:01:51.005 [259/268] Linking target lib/librte_hash.so.24.1 00:01:51.005 [260/268] Linking target lib/librte_security.so.24.1 00:01:51.005 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:51.261 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:51.261 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:51.261 [264/268] Linking target lib/librte_power.so.24.1 00:01:55.441 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:55.441 [266/268] Linking static target lib/librte_vhost.a 00:01:56.010 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.010 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:56.010 INFO: autodetecting backend as ninja 00:01:56.010 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 32 00:01:57.389 CC lib/ut_mock/mock.o 00:01:57.389 CC lib/ut/ut.o 00:01:57.389 CC lib/log/log.o 00:01:57.389 CC lib/log/log_flags.o 00:01:57.389 CC lib/log/log_deprecated.o 00:01:57.389 LIB libspdk_log.a 
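The numbered [1/268]..[268/268] block above is the bundled DPDK being configured and built with meson and ninja before SPDK's own objects (the CC/LIB lines that follow) are compiled. A minimal sketch of an equivalent manual invocation, using the option values from the configuration summary at the top of this block; the exact option set SPDK's configure script passes is an assumption:

    # Reconstructed from the config summary (enable_docs/enable_kmods false,
    # max_lcores 128, tests false, only the ring/PCI/vdev drivers enabled) and
    # from the "ninja ... -j 32" command the log prints; not SPDK's literal call.
    meson setup dpdk/build-tmp dpdk \
      -Denable_docs=false -Denable_kmods=false \
      -Dmax_lcores=128 -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
    ninja -C dpdk/build-tmp -j 32
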
00:01:57.389 LIB libspdk_ut.a 00:01:57.389 LIB libspdk_ut_mock.a 00:01:57.389 SO libspdk_ut.so.2.0 00:01:57.389 SO libspdk_ut_mock.so.6.0 00:01:57.389 SO libspdk_log.so.7.0 00:01:57.389 SYMLINK libspdk_ut.so 00:01:57.389 SYMLINK libspdk_ut_mock.so 00:01:57.389 SYMLINK libspdk_log.so 00:01:57.648 CXX lib/trace_parser/trace.o 00:01:57.648 CC lib/util/base64.o 00:01:57.648 CC lib/util/bit_array.o 00:01:57.648 CC lib/dma/dma.o 00:01:57.648 CC lib/util/cpuset.o 00:01:57.648 CC lib/ioat/ioat.o 00:01:57.648 CC lib/util/crc16.o 00:01:57.648 CC lib/util/crc32.o 00:01:57.648 CC lib/util/crc32c.o 00:01:57.648 CC lib/util/crc32_ieee.o 00:01:57.648 CC lib/util/crc64.o 00:01:57.648 CC lib/util/dif.o 00:01:57.648 CC lib/util/fd.o 00:01:57.648 CC lib/util/fd_group.o 00:01:57.648 CC lib/util/file.o 00:01:57.648 CC lib/util/hexlify.o 00:01:57.648 CC lib/util/iov.o 00:01:57.648 CC lib/util/net.o 00:01:57.648 CC lib/util/math.o 00:01:57.648 CC lib/util/pipe.o 00:01:57.648 CC lib/util/strerror_tls.o 00:01:57.648 CC lib/util/string.o 00:01:57.648 CC lib/util/xor.o 00:01:57.648 CC lib/util/uuid.o 00:01:57.648 CC lib/util/zipf.o 00:01:57.648 CC lib/vfio_user/host/vfio_user_pci.o 00:01:57.648 CC lib/vfio_user/host/vfio_user.o 00:01:57.907 LIB libspdk_ioat.a 00:01:57.907 LIB libspdk_dma.a 00:01:57.907 SO libspdk_ioat.so.7.0 00:01:57.907 SO libspdk_dma.so.4.0 00:01:58.166 SYMLINK libspdk_ioat.so 00:01:58.166 LIB libspdk_vfio_user.a 00:01:58.166 SYMLINK libspdk_dma.so 00:01:58.166 SO libspdk_vfio_user.so.5.0 00:01:58.166 SYMLINK libspdk_vfio_user.so 00:01:58.166 LIB libspdk_util.a 00:01:58.166 SO libspdk_util.so.10.0 00:01:58.425 SYMLINK libspdk_util.so 00:01:58.683 CC lib/conf/conf.o 00:01:58.683 CC lib/rdma_utils/rdma_utils.o 00:01:58.683 CC lib/idxd/idxd.o 00:01:58.683 CC lib/idxd/idxd_user.o 00:01:58.683 CC lib/idxd/idxd_kernel.o 00:01:58.683 CC lib/vmd/vmd.o 00:01:58.683 CC lib/json/json_parse.o 00:01:58.683 CC lib/rdma_provider/common.o 00:01:58.683 CC lib/vmd/led.o 00:01:58.683 CC lib/json/json_util.o 00:01:58.683 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:58.683 CC lib/json/json_write.o 00:01:58.683 CC lib/env_dpdk/env.o 00:01:58.684 CC lib/env_dpdk/memory.o 00:01:58.684 CC lib/env_dpdk/pci.o 00:01:58.684 CC lib/env_dpdk/init.o 00:01:58.684 CC lib/env_dpdk/threads.o 00:01:58.684 CC lib/env_dpdk/pci_ioat.o 00:01:58.684 CC lib/env_dpdk/pci_virtio.o 00:01:58.684 CC lib/env_dpdk/pci_vmd.o 00:01:58.684 CC lib/env_dpdk/pci_idxd.o 00:01:58.684 CC lib/env_dpdk/pci_event.o 00:01:58.684 CC lib/env_dpdk/sigbus_handler.o 00:01:58.684 CC lib/env_dpdk/pci_dpdk.o 00:01:58.684 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:58.684 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:58.684 LIB libspdk_trace_parser.a 00:01:58.684 SO libspdk_trace_parser.so.5.0 00:01:58.942 SYMLINK libspdk_trace_parser.so 00:01:58.942 LIB libspdk_rdma_provider.a 00:01:58.942 SO libspdk_rdma_provider.so.6.0 00:01:58.942 LIB libspdk_conf.a 00:01:58.942 SO libspdk_conf.so.6.0 00:01:58.942 SYMLINK libspdk_rdma_provider.so 00:01:58.942 SYMLINK libspdk_conf.so 00:01:58.942 LIB libspdk_rdma_utils.a 00:01:58.942 LIB libspdk_json.a 00:01:58.942 SO libspdk_rdma_utils.so.1.0 00:01:59.201 SO libspdk_json.so.6.0 00:01:59.201 SYMLINK libspdk_rdma_utils.so 00:01:59.201 SYMLINK libspdk_json.so 00:01:59.201 LIB libspdk_idxd.a 00:01:59.201 SO libspdk_idxd.so.12.0 00:01:59.201 LIB libspdk_vmd.a 00:01:59.201 CC lib/jsonrpc/jsonrpc_server.o 00:01:59.201 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:59.201 CC lib/jsonrpc/jsonrpc_client.o 00:01:59.201 CC lib/jsonrpc/jsonrpc_client_tcp.o 
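The CC/LIB/SO/SYMLINK prefixes in this stretch are SPDK's quiet-make output: compile an object, archive a static library, link the versioned shared object, and point the unversioned name at it. Roughly what one such pair expands to; the linker flags come from SPDK's makefiles and are assumed here, not copied from the log:

    # What "SO libspdk_log.so.7.0" / "SYMLINK libspdk_log.so" above stand for,
    # using the log library's three objects compiled earlier in this block.
    cc -shared -o build/lib/libspdk_log.so.7.0 log.o log_flags.o log_deprecated.o
    ln -sf libspdk_log.so.7.0 build/lib/libspdk_log.so
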
00:01:59.201 SO libspdk_vmd.so.6.0 00:01:59.459 SYMLINK libspdk_idxd.so 00:01:59.459 SYMLINK libspdk_vmd.so 00:01:59.459 LIB libspdk_jsonrpc.a 00:01:59.718 SO libspdk_jsonrpc.so.6.0 00:01:59.718 SYMLINK libspdk_jsonrpc.so 00:01:59.718 CC lib/rpc/rpc.o 00:01:59.977 LIB libspdk_rpc.a 00:01:59.977 SO libspdk_rpc.so.6.0 00:02:00.235 SYMLINK libspdk_rpc.so 00:02:00.235 CC lib/keyring/keyring.o 00:02:00.235 CC lib/keyring/keyring_rpc.o 00:02:00.235 CC lib/trace/trace.o 00:02:00.235 CC lib/notify/notify.o 00:02:00.235 CC lib/notify/notify_rpc.o 00:02:00.235 CC lib/trace/trace_flags.o 00:02:00.235 CC lib/trace/trace_rpc.o 00:02:00.493 LIB libspdk_notify.a 00:02:00.493 SO libspdk_notify.so.6.0 00:02:00.493 LIB libspdk_keyring.a 00:02:00.493 SYMLINK libspdk_notify.so 00:02:00.493 LIB libspdk_trace.a 00:02:00.493 SO libspdk_keyring.so.1.0 00:02:00.750 SO libspdk_trace.so.10.0 00:02:00.750 SYMLINK libspdk_keyring.so 00:02:00.751 SYMLINK libspdk_trace.so 00:02:00.751 CC lib/thread/thread.o 00:02:00.751 CC lib/thread/iobuf.o 00:02:00.751 CC lib/sock/sock.o 00:02:00.751 CC lib/sock/sock_rpc.o 00:02:01.009 LIB libspdk_env_dpdk.a 00:02:01.009 SO libspdk_env_dpdk.so.15.0 00:02:01.267 SYMLINK libspdk_env_dpdk.so 00:02:01.267 LIB libspdk_sock.a 00:02:01.267 SO libspdk_sock.so.10.0 00:02:01.526 SYMLINK libspdk_sock.so 00:02:01.526 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:01.526 CC lib/nvme/nvme_ctrlr.o 00:02:01.526 CC lib/nvme/nvme_fabric.o 00:02:01.526 CC lib/nvme/nvme_ns_cmd.o 00:02:01.526 CC lib/nvme/nvme_ns.o 00:02:01.526 CC lib/nvme/nvme_pcie_common.o 00:02:01.526 CC lib/nvme/nvme_pcie.o 00:02:01.526 CC lib/nvme/nvme_qpair.o 00:02:01.526 CC lib/nvme/nvme.o 00:02:01.526 CC lib/nvme/nvme_quirks.o 00:02:01.526 CC lib/nvme/nvme_transport.o 00:02:01.526 CC lib/nvme/nvme_discovery.o 00:02:01.526 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:01.526 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:01.526 CC lib/nvme/nvme_tcp.o 00:02:01.526 CC lib/nvme/nvme_opal.o 00:02:01.526 CC lib/nvme/nvme_io_msg.o 00:02:01.526 CC lib/nvme/nvme_poll_group.o 00:02:01.526 CC lib/nvme/nvme_zns.o 00:02:01.526 CC lib/nvme/nvme_stubs.o 00:02:01.526 CC lib/nvme/nvme_auth.o 00:02:01.526 CC lib/nvme/nvme_cuse.o 00:02:01.526 CC lib/nvme/nvme_vfio_user.o 00:02:01.526 CC lib/nvme/nvme_rdma.o 00:02:02.462 LIB libspdk_thread.a 00:02:02.721 SO libspdk_thread.so.10.1 00:02:02.721 SYMLINK libspdk_thread.so 00:02:02.721 CC lib/accel/accel.o 00:02:02.721 CC lib/accel/accel_sw.o 00:02:02.721 CC lib/accel/accel_rpc.o 00:02:02.721 CC lib/virtio/virtio.o 00:02:02.721 CC lib/virtio/virtio_vhost_user.o 00:02:02.721 CC lib/vfu_tgt/tgt_endpoint.o 00:02:02.721 CC lib/vfu_tgt/tgt_rpc.o 00:02:02.721 CC lib/virtio/virtio_vfio_user.o 00:02:02.721 CC lib/virtio/virtio_pci.o 00:02:02.983 CC lib/blob/request.o 00:02:02.983 CC lib/blob/blobstore.o 00:02:02.983 CC lib/blob/zeroes.o 00:02:02.983 CC lib/blob/blob_bs_dev.o 00:02:02.983 CC lib/init/json_config.o 00:02:02.983 CC lib/init/subsystem.o 00:02:02.983 CC lib/init/subsystem_rpc.o 00:02:02.983 CC lib/init/rpc.o 00:02:03.241 LIB libspdk_init.a 00:02:03.241 LIB libspdk_vfu_tgt.a 00:02:03.241 SO libspdk_vfu_tgt.so.3.0 00:02:03.241 LIB libspdk_virtio.a 00:02:03.241 SO libspdk_init.so.5.0 00:02:03.241 SO libspdk_virtio.so.7.0 00:02:03.241 SYMLINK libspdk_vfu_tgt.so 00:02:03.241 SYMLINK libspdk_init.so 00:02:03.499 SYMLINK libspdk_virtio.so 00:02:03.499 CC lib/event/app.o 00:02:03.499 CC lib/event/reactor.o 00:02:03.499 CC lib/event/log_rpc.o 00:02:03.499 CC lib/event/scheduler_static.o 00:02:03.499 CC lib/event/app_rpc.o 00:02:04.074 
LIB libspdk_event.a 00:02:04.074 SO libspdk_event.so.14.0 00:02:04.074 SYMLINK libspdk_event.so 00:02:04.074 LIB libspdk_accel.a 00:02:04.074 SO libspdk_accel.so.16.0 00:02:04.074 SYMLINK libspdk_accel.so 00:02:04.353 LIB libspdk_nvme.a 00:02:04.353 CC lib/bdev/bdev.o 00:02:04.353 CC lib/bdev/bdev_rpc.o 00:02:04.353 CC lib/bdev/bdev_zone.o 00:02:04.353 CC lib/bdev/part.o 00:02:04.353 CC lib/bdev/scsi_nvme.o 00:02:04.353 SO libspdk_nvme.so.13.1 00:02:04.667 SYMLINK libspdk_nvme.so 00:02:06.041 LIB libspdk_blob.a 00:02:06.041 SO libspdk_blob.so.11.0 00:02:06.041 SYMLINK libspdk_blob.so 00:02:06.298 CC lib/blobfs/blobfs.o 00:02:06.298 CC lib/lvol/lvol.o 00:02:06.298 CC lib/blobfs/tree.o 00:02:06.864 LIB libspdk_bdev.a 00:02:06.864 SO libspdk_bdev.so.16.0 00:02:06.864 SYMLINK libspdk_bdev.so 00:02:07.132 LIB libspdk_blobfs.a 00:02:07.132 SO libspdk_blobfs.so.10.0 00:02:07.132 CC lib/nbd/nbd.o 00:02:07.132 CC lib/scsi/dev.o 00:02:07.132 CC lib/nbd/nbd_rpc.o 00:02:07.132 CC lib/ublk/ublk.o 00:02:07.132 CC lib/nvmf/ctrlr.o 00:02:07.132 CC lib/scsi/lun.o 00:02:07.132 CC lib/ublk/ublk_rpc.o 00:02:07.132 CC lib/nvmf/ctrlr_discovery.o 00:02:07.132 CC lib/scsi/port.o 00:02:07.132 CC lib/nvmf/ctrlr_bdev.o 00:02:07.132 CC lib/scsi/scsi.o 00:02:07.132 CC lib/scsi/scsi_bdev.o 00:02:07.132 CC lib/nvmf/nvmf.o 00:02:07.132 CC lib/nvmf/subsystem.o 00:02:07.132 CC lib/scsi/scsi_pr.o 00:02:07.132 CC lib/nvmf/nvmf_rpc.o 00:02:07.132 CC lib/scsi/scsi_rpc.o 00:02:07.132 CC lib/ftl/ftl_core.o 00:02:07.132 CC lib/scsi/task.o 00:02:07.132 CC lib/nvmf/transport.o 00:02:07.132 CC lib/nvmf/tcp.o 00:02:07.132 CC lib/ftl/ftl_init.o 00:02:07.132 CC lib/nvmf/stubs.o 00:02:07.132 CC lib/ftl/ftl_layout.o 00:02:07.132 CC lib/nvmf/mdns_server.o 00:02:07.132 CC lib/ftl/ftl_debug.o 00:02:07.132 CC lib/nvmf/vfio_user.o 00:02:07.132 CC lib/ftl/ftl_io.o 00:02:07.132 CC lib/nvmf/rdma.o 00:02:07.132 CC lib/ftl/ftl_sb.o 00:02:07.132 SYMLINK libspdk_blobfs.so 00:02:07.132 CC lib/ftl/ftl_l2p.o 00:02:07.132 LIB libspdk_lvol.a 00:02:07.391 SO libspdk_lvol.so.10.0 00:02:07.391 CC lib/nvmf/auth.o 00:02:07.391 CC lib/ftl/ftl_l2p_flat.o 00:02:07.391 CC lib/ftl/ftl_nv_cache.o 00:02:07.391 SYMLINK libspdk_lvol.so 00:02:07.391 CC lib/ftl/ftl_band.o 00:02:07.391 CC lib/ftl/ftl_band_ops.o 00:02:07.391 CC lib/ftl/ftl_writer.o 00:02:07.391 CC lib/ftl/ftl_rq.o 00:02:07.391 CC lib/ftl/ftl_reloc.o 00:02:07.391 CC lib/ftl/ftl_l2p_cache.o 00:02:07.653 CC lib/ftl/ftl_p2l.o 00:02:07.653 CC lib/ftl/mngt/ftl_mngt.o 00:02:07.653 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:07.653 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:07.653 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:07.653 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:07.653 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:07.653 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:07.912 LIB libspdk_nbd.a 00:02:07.912 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:07.912 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:07.912 SO libspdk_nbd.so.7.0 00:02:07.912 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:07.912 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:07.912 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:07.912 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:07.912 SYMLINK libspdk_nbd.so 00:02:07.912 LIB libspdk_scsi.a 00:02:07.912 CC lib/ftl/utils/ftl_conf.o 00:02:07.912 CC lib/ftl/utils/ftl_md.o 00:02:07.912 SO libspdk_scsi.so.9.0 00:02:07.912 CC lib/ftl/utils/ftl_mempool.o 00:02:07.912 CC lib/ftl/utils/ftl_bitmap.o 00:02:08.178 CC lib/ftl/utils/ftl_property.o 00:02:08.178 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:08.178 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:08.178 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:08.178 LIB libspdk_ublk.a 00:02:08.178 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:08.178 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:08.178 SO libspdk_ublk.so.3.0 00:02:08.178 SYMLINK libspdk_scsi.so 00:02:08.178 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:08.178 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:08.178 SYMLINK libspdk_ublk.so 00:02:08.439 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.439 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.439 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.439 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.439 CC lib/ftl/base/ftl_base_dev.o 00:02:08.439 CC lib/ftl/base/ftl_base_bdev.o 00:02:08.439 CC lib/ftl/ftl_trace.o 00:02:08.439 CC lib/iscsi/conn.o 00:02:08.439 CC lib/iscsi/init_grp.o 00:02:08.439 CC lib/iscsi/iscsi.o 00:02:08.439 CC lib/iscsi/md5.o 00:02:08.439 CC lib/iscsi/param.o 00:02:08.439 CC lib/vhost/vhost.o 00:02:08.439 CC lib/iscsi/portal_grp.o 00:02:08.439 CC lib/iscsi/tgt_node.o 00:02:08.439 CC lib/vhost/vhost_rpc.o 00:02:08.439 CC lib/iscsi/iscsi_subsystem.o 00:02:08.701 CC lib/vhost/vhost_scsi.o 00:02:08.701 CC lib/vhost/vhost_blk.o 00:02:08.701 CC lib/iscsi/iscsi_rpc.o 00:02:08.701 CC lib/vhost/rte_vhost_user.o 00:02:08.701 CC lib/iscsi/task.o 00:02:09.268 LIB libspdk_ftl.a 00:02:09.268 SO libspdk_ftl.so.9.0 00:02:09.526 SYMLINK libspdk_ftl.so 00:02:10.092 LIB libspdk_vhost.a 00:02:10.092 LIB libspdk_nvmf.a 00:02:10.092 LIB libspdk_iscsi.a 00:02:10.092 SO libspdk_vhost.so.8.0 00:02:10.092 SO libspdk_iscsi.so.8.0 00:02:10.092 SO libspdk_nvmf.so.19.0 00:02:10.093 SYMLINK libspdk_vhost.so 00:02:10.351 SYMLINK libspdk_iscsi.so 00:02:10.351 SYMLINK libspdk_nvmf.so 00:02:10.611 CC module/env_dpdk/env_dpdk_rpc.o 00:02:10.611 CC module/vfu_device/vfu_virtio.o 00:02:10.611 CC module/vfu_device/vfu_virtio_blk.o 00:02:10.611 CC module/vfu_device/vfu_virtio_rpc.o 00:02:10.611 CC module/vfu_device/vfu_virtio_scsi.o 00:02:10.611 CC module/accel/error/accel_error.o 00:02:10.611 CC module/accel/error/accel_error_rpc.o 00:02:10.611 CC module/sock/posix/posix.o 00:02:10.611 CC module/keyring/linux/keyring.o 00:02:10.611 CC module/keyring/linux/keyring_rpc.o 00:02:10.611 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:10.611 CC module/accel/dsa/accel_dsa.o 00:02:10.611 CC module/keyring/file/keyring.o 00:02:10.611 CC module/keyring/file/keyring_rpc.o 00:02:10.611 CC module/accel/dsa/accel_dsa_rpc.o 00:02:10.611 CC module/accel/ioat/accel_ioat.o 00:02:10.611 CC module/accel/ioat/accel_ioat_rpc.o 00:02:10.611 CC module/scheduler/gscheduler/gscheduler.o 00:02:10.611 CC module/blob/bdev/blob_bdev.o 00:02:10.611 CC module/accel/iaa/accel_iaa.o 00:02:10.611 CC module/accel/iaa/accel_iaa_rpc.o 00:02:10.611 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:10.869 LIB libspdk_env_dpdk_rpc.a 00:02:10.869 SO libspdk_env_dpdk_rpc.so.6.0 00:02:10.869 SYMLINK libspdk_env_dpdk_rpc.so 00:02:10.869 LIB libspdk_keyring_linux.a 00:02:10.869 LIB libspdk_keyring_file.a 00:02:10.869 LIB libspdk_scheduler_dynamic.a 00:02:10.869 SO libspdk_keyring_file.so.1.0 00:02:10.869 SO libspdk_keyring_linux.so.1.0 00:02:10.869 LIB libspdk_scheduler_gscheduler.a 00:02:10.869 SO libspdk_scheduler_dynamic.so.4.0 00:02:10.869 LIB libspdk_scheduler_dpdk_governor.a 00:02:10.869 SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.128 LIB libspdk_accel_error.a 00:02:11.128 LIB libspdk_accel_dsa.a 00:02:11.128 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:11.128 SYMLINK libspdk_keyring_file.so 00:02:11.128 LIB libspdk_accel_ioat.a 00:02:11.128 SYMLINK 
libspdk_scheduler_dynamic.so 00:02:11.128 SO libspdk_accel_error.so.2.0 00:02:11.128 SO libspdk_accel_dsa.so.5.0 00:02:11.128 SYMLINK libspdk_scheduler_gscheduler.so 00:02:11.128 SYMLINK libspdk_keyring_linux.so 00:02:11.128 SO libspdk_accel_ioat.so.6.0 00:02:11.128 LIB libspdk_blob_bdev.a 00:02:11.128 LIB libspdk_accel_iaa.a 00:02:11.128 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.128 SO libspdk_blob_bdev.so.11.0 00:02:11.128 SYMLINK libspdk_accel_error.so 00:02:11.128 SO libspdk_accel_iaa.so.3.0 00:02:11.128 SYMLINK libspdk_accel_dsa.so 00:02:11.128 SYMLINK libspdk_accel_ioat.so 00:02:11.128 SYMLINK libspdk_blob_bdev.so 00:02:11.128 SYMLINK libspdk_accel_iaa.so 00:02:11.394 CC module/bdev/raid/bdev_raid.o 00:02:11.394 CC module/bdev/raid/bdev_raid_rpc.o 00:02:11.394 CC module/bdev/gpt/gpt.o 00:02:11.394 CC module/bdev/error/vbdev_error.o 00:02:11.394 CC module/bdev/gpt/vbdev_gpt.o 00:02:11.394 CC module/bdev/aio/bdev_aio.o 00:02:11.394 CC module/bdev/raid/bdev_raid_sb.o 00:02:11.394 CC module/bdev/aio/bdev_aio_rpc.o 00:02:11.394 CC module/bdev/passthru/vbdev_passthru.o 00:02:11.394 CC module/bdev/error/vbdev_error_rpc.o 00:02:11.394 CC module/bdev/raid/raid1.o 00:02:11.394 CC module/bdev/raid/raid0.o 00:02:11.394 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:11.394 CC module/bdev/raid/concat.o 00:02:11.394 CC module/bdev/delay/vbdev_delay.o 00:02:11.394 CC module/bdev/malloc/bdev_malloc.o 00:02:11.394 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:11.394 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:11.394 CC module/bdev/ftl/bdev_ftl.o 00:02:11.394 CC module/bdev/nvme/bdev_nvme.o 00:02:11.394 CC module/bdev/split/vbdev_split.o 00:02:11.394 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:11.394 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:11.394 CC module/bdev/iscsi/bdev_iscsi.o 00:02:11.394 CC module/blobfs/bdev/blobfs_bdev.o 00:02:11.394 CC module/bdev/lvol/vbdev_lvol.o 00:02:11.394 CC module/bdev/null/bdev_null.o 00:02:11.394 LIB libspdk_vfu_device.a 00:02:11.394 CC module/bdev/split/vbdev_split_rpc.o 00:02:11.394 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:11.394 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:11.394 SO libspdk_vfu_device.so.3.0 00:02:11.657 SYMLINK libspdk_vfu_device.so 00:02:11.657 CC module/bdev/nvme/nvme_rpc.o 00:02:11.657 CC module/bdev/nvme/bdev_mdns_client.o 00:02:11.657 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:11.657 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:11.657 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:11.657 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:11.657 CC module/bdev/nvme/vbdev_opal.o 00:02:11.917 CC module/bdev/null/bdev_null_rpc.o 00:02:11.917 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:11.917 LIB libspdk_sock_posix.a 00:02:11.917 SO libspdk_sock_posix.so.6.0 00:02:11.917 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:11.917 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:11.917 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:11.917 LIB libspdk_bdev_split.a 00:02:11.917 LIB libspdk_bdev_error.a 00:02:11.917 LIB libspdk_bdev_gpt.a 00:02:11.917 SO libspdk_bdev_split.so.6.0 00:02:11.917 LIB libspdk_bdev_ftl.a 00:02:11.917 SO libspdk_bdev_error.so.6.0 00:02:11.917 LIB libspdk_bdev_passthru.a 00:02:11.917 SO libspdk_bdev_gpt.so.6.0 00:02:11.917 SYMLINK libspdk_sock_posix.so 00:02:11.917 SO libspdk_bdev_ftl.so.6.0 00:02:11.917 SO libspdk_bdev_passthru.so.6.0 00:02:11.917 SYMLINK libspdk_bdev_split.so 00:02:11.917 LIB libspdk_bdev_aio.a 00:02:11.917 LIB libspdk_bdev_zone_block.a 00:02:12.175 SO libspdk_bdev_aio.so.6.0 00:02:12.175 LIB 
libspdk_bdev_malloc.a 00:02:12.175 SYMLINK libspdk_bdev_error.so 00:02:12.175 LIB libspdk_blobfs_bdev.a 00:02:12.175 SYMLINK libspdk_bdev_gpt.so 00:02:12.175 SO libspdk_bdev_zone_block.so.6.0 00:02:12.175 LIB libspdk_bdev_delay.a 00:02:12.175 LIB libspdk_bdev_iscsi.a 00:02:12.175 SO libspdk_bdev_malloc.so.6.0 00:02:12.175 LIB libspdk_bdev_null.a 00:02:12.175 SYMLINK libspdk_bdev_ftl.so 00:02:12.175 SO libspdk_blobfs_bdev.so.6.0 00:02:12.175 SYMLINK libspdk_bdev_passthru.so 00:02:12.175 SO libspdk_bdev_delay.so.6.0 00:02:12.175 SO libspdk_bdev_null.so.6.0 00:02:12.175 SO libspdk_bdev_iscsi.so.6.0 00:02:12.175 SYMLINK libspdk_bdev_aio.so 00:02:12.175 SYMLINK libspdk_bdev_zone_block.so 00:02:12.175 SYMLINK libspdk_blobfs_bdev.so 00:02:12.175 SYMLINK libspdk_bdev_malloc.so 00:02:12.175 SYMLINK libspdk_bdev_delay.so 00:02:12.175 SYMLINK libspdk_bdev_null.so 00:02:12.175 SYMLINK libspdk_bdev_iscsi.so 00:02:12.175 LIB libspdk_bdev_virtio.a 00:02:12.175 SO libspdk_bdev_virtio.so.6.0 00:02:12.175 LIB libspdk_bdev_lvol.a 00:02:12.432 SO libspdk_bdev_lvol.so.6.0 00:02:12.432 SYMLINK libspdk_bdev_virtio.so 00:02:12.432 SYMLINK libspdk_bdev_lvol.so 00:02:12.690 LIB libspdk_bdev_raid.a 00:02:12.690 SO libspdk_bdev_raid.so.6.0 00:02:12.948 SYMLINK libspdk_bdev_raid.so 00:02:13.881 LIB libspdk_bdev_nvme.a 00:02:13.881 SO libspdk_bdev_nvme.so.7.0 00:02:14.139 SYMLINK libspdk_bdev_nvme.so 00:02:14.397 CC module/event/subsystems/sock/sock.o 00:02:14.397 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:14.397 CC module/event/subsystems/scheduler/scheduler.o 00:02:14.397 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:14.398 CC module/event/subsystems/vmd/vmd.o 00:02:14.398 CC module/event/subsystems/iobuf/iobuf.o 00:02:14.398 CC module/event/subsystems/keyring/keyring.o 00:02:14.398 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:14.398 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:14.657 LIB libspdk_event_keyring.a 00:02:14.657 LIB libspdk_event_vhost_blk.a 00:02:14.657 LIB libspdk_event_vfu_tgt.a 00:02:14.657 LIB libspdk_event_scheduler.a 00:02:14.657 LIB libspdk_event_sock.a 00:02:14.657 LIB libspdk_event_vmd.a 00:02:14.657 SO libspdk_event_keyring.so.1.0 00:02:14.657 LIB libspdk_event_iobuf.a 00:02:14.657 SO libspdk_event_vhost_blk.so.3.0 00:02:14.657 SO libspdk_event_vfu_tgt.so.3.0 00:02:14.657 SO libspdk_event_scheduler.so.4.0 00:02:14.657 SO libspdk_event_sock.so.5.0 00:02:14.657 SO libspdk_event_vmd.so.6.0 00:02:14.657 SO libspdk_event_iobuf.so.3.0 00:02:14.657 SYMLINK libspdk_event_keyring.so 00:02:14.657 SYMLINK libspdk_event_vhost_blk.so 00:02:14.657 SYMLINK libspdk_event_vfu_tgt.so 00:02:14.657 SYMLINK libspdk_event_scheduler.so 00:02:14.657 SYMLINK libspdk_event_sock.so 00:02:14.657 SYMLINK libspdk_event_vmd.so 00:02:14.657 SYMLINK libspdk_event_iobuf.so 00:02:14.916 CC module/event/subsystems/accel/accel.o 00:02:15.176 LIB libspdk_event_accel.a 00:02:15.176 SO libspdk_event_accel.so.6.0 00:02:15.176 SYMLINK libspdk_event_accel.so 00:02:15.434 CC module/event/subsystems/bdev/bdev.o 00:02:15.434 LIB libspdk_event_bdev.a 00:02:15.692 SO libspdk_event_bdev.so.6.0 00:02:15.692 SYMLINK libspdk_event_bdev.so 00:02:15.692 CC module/event/subsystems/scsi/scsi.o 00:02:15.692 CC module/event/subsystems/ublk/ublk.o 00:02:15.692 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:15.692 CC module/event/subsystems/nbd/nbd.o 00:02:15.692 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:15.951 LIB libspdk_event_nbd.a 00:02:15.951 LIB libspdk_event_ublk.a 00:02:15.951 LIB libspdk_event_scsi.a 
00:02:15.951 SO libspdk_event_ublk.so.3.0 00:02:15.951 SO libspdk_event_nbd.so.6.0 00:02:15.951 SO libspdk_event_scsi.so.6.0 00:02:15.951 SYMLINK libspdk_event_ublk.so 00:02:15.951 SYMLINK libspdk_event_nbd.so 00:02:15.951 SYMLINK libspdk_event_scsi.so 00:02:16.210 LIB libspdk_event_nvmf.a 00:02:16.210 SO libspdk_event_nvmf.so.6.0 00:02:16.210 SYMLINK libspdk_event_nvmf.so 00:02:16.210 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:16.210 CC module/event/subsystems/iscsi/iscsi.o 00:02:16.470 LIB libspdk_event_vhost_scsi.a 00:02:16.470 LIB libspdk_event_iscsi.a 00:02:16.470 SO libspdk_event_vhost_scsi.so.3.0 00:02:16.470 SO libspdk_event_iscsi.so.6.0 00:02:16.470 SYMLINK libspdk_event_vhost_scsi.so 00:02:16.470 SYMLINK libspdk_event_iscsi.so 00:02:16.729 SO libspdk.so.6.0 00:02:16.729 SYMLINK libspdk.so 00:02:17.000 CXX app/trace/trace.o 00:02:17.000 CC app/trace_record/trace_record.o 00:02:17.000 CC app/spdk_top/spdk_top.o 00:02:17.000 CC app/spdk_lspci/spdk_lspci.o 00:02:17.000 CC app/spdk_nvme_identify/identify.o 00:02:17.000 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.000 CC test/rpc_client/rpc_client_test.o 00:02:17.000 TEST_HEADER include/spdk/accel.h 00:02:17.000 CC app/spdk_nvme_perf/perf.o 00:02:17.000 TEST_HEADER include/spdk/accel_module.h 00:02:17.000 TEST_HEADER include/spdk/assert.h 00:02:17.000 TEST_HEADER include/spdk/barrier.h 00:02:17.000 TEST_HEADER include/spdk/base64.h 00:02:17.000 TEST_HEADER include/spdk/bdev.h 00:02:17.000 TEST_HEADER include/spdk/bdev_module.h 00:02:17.000 TEST_HEADER include/spdk/bdev_zone.h 00:02:17.000 TEST_HEADER include/spdk/bit_array.h 00:02:17.000 TEST_HEADER include/spdk/bit_pool.h 00:02:17.000 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.000 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.000 TEST_HEADER include/spdk/blobfs.h 00:02:17.000 TEST_HEADER include/spdk/blob.h 00:02:17.000 TEST_HEADER include/spdk/conf.h 00:02:17.000 TEST_HEADER include/spdk/config.h 00:02:17.000 TEST_HEADER include/spdk/cpuset.h 00:02:17.000 TEST_HEADER include/spdk/crc16.h 00:02:17.000 TEST_HEADER include/spdk/crc32.h 00:02:17.000 TEST_HEADER include/spdk/crc64.h 00:02:17.000 TEST_HEADER include/spdk/dif.h 00:02:17.000 TEST_HEADER include/spdk/dma.h 00:02:17.000 TEST_HEADER include/spdk/endian.h 00:02:17.000 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.000 TEST_HEADER include/spdk/env.h 00:02:17.000 TEST_HEADER include/spdk/event.h 00:02:17.000 TEST_HEADER include/spdk/fd_group.h 00:02:17.000 TEST_HEADER include/spdk/fd.h 00:02:17.000 TEST_HEADER include/spdk/file.h 00:02:17.000 TEST_HEADER include/spdk/ftl.h 00:02:17.000 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.000 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.000 CC app/spdk_dd/spdk_dd.o 00:02:17.000 TEST_HEADER include/spdk/hexlify.h 00:02:17.000 TEST_HEADER include/spdk/histogram_data.h 00:02:17.000 TEST_HEADER include/spdk/idxd.h 00:02:17.000 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.000 TEST_HEADER include/spdk/init.h 00:02:17.000 TEST_HEADER include/spdk/ioat.h 00:02:17.000 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.000 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.000 TEST_HEADER include/spdk/json.h 00:02:17.000 CC app/nvmf_tgt/nvmf_main.o 00:02:17.000 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.000 TEST_HEADER include/spdk/keyring.h 00:02:17.000 TEST_HEADER include/spdk/keyring_module.h 00:02:17.000 TEST_HEADER include/spdk/likely.h 00:02:17.000 TEST_HEADER include/spdk/log.h 00:02:17.000 TEST_HEADER include/spdk/lvol.h 00:02:17.000 CC app/iscsi_tgt/iscsi_tgt.o 
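The TEST_HEADER entries above, together with the CXX test/cpp_headers/*.o compiles that continue below, are SPDK's header self-sufficiency check: every public header is compiled on its own as a C++ translation unit, so a header that forgets one of its own includes fails in isolation. A minimal sketch of the idea; the generation loop is illustrative, not SPDK's actual generator:

    # Hypothetical generator: one C++ TU per public header, compiled alone.
    for h in include/spdk/*.h; do
      name=$(basename "$h" .h)
      printf '#include "spdk/%s.h"\n' "$name" > "test/cpp_headers/$name.cpp"
      c++ -Iinclude -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
    done
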
00:02:17.000 TEST_HEADER include/spdk/memory.h 00:02:17.000 TEST_HEADER include/spdk/mmio.h 00:02:17.000 TEST_HEADER include/spdk/nbd.h 00:02:17.000 TEST_HEADER include/spdk/net.h 00:02:17.000 TEST_HEADER include/spdk/notify.h 00:02:17.000 TEST_HEADER include/spdk/nvme.h 00:02:17.000 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.000 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.000 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.000 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.000 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.000 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.000 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.000 CC examples/util/zipf/zipf.o 00:02:17.000 CC test/thread/poller_perf/poller_perf.o 00:02:17.000 TEST_HEADER include/spdk/nvmf.h 00:02:17.000 CC test/app/histogram_perf/histogram_perf.o 00:02:17.000 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.000 CC examples/ioat/perf/perf.o 00:02:17.000 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.000 CC app/fio/nvme/fio_plugin.o 00:02:17.000 TEST_HEADER include/spdk/opal.h 00:02:17.000 TEST_HEADER include/spdk/opal_spec.h 00:02:17.000 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.000 CC examples/ioat/verify/verify.o 00:02:17.000 TEST_HEADER include/spdk/pci_ids.h 00:02:17.000 CC test/env/vtophys/vtophys.o 00:02:17.000 CC test/app/stub/stub.o 00:02:17.000 TEST_HEADER include/spdk/pipe.h 00:02:17.000 CC test/app/jsoncat/jsoncat.o 00:02:17.000 CC test/env/memory/memory_ut.o 00:02:17.000 CC app/spdk_tgt/spdk_tgt.o 00:02:17.000 TEST_HEADER include/spdk/queue.h 00:02:17.000 CC test/env/pci/pci_ut.o 00:02:17.000 TEST_HEADER include/spdk/reduce.h 00:02:17.000 TEST_HEADER include/spdk/rpc.h 00:02:17.000 TEST_HEADER include/spdk/scheduler.h 00:02:17.000 TEST_HEADER include/spdk/scsi.h 00:02:17.000 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.000 TEST_HEADER include/spdk/sock.h 00:02:17.000 TEST_HEADER include/spdk/stdinc.h 00:02:17.000 TEST_HEADER include/spdk/string.h 00:02:17.000 CC test/dma/test_dma/test_dma.o 00:02:17.000 TEST_HEADER include/spdk/thread.h 00:02:17.000 TEST_HEADER include/spdk/trace.h 00:02:17.000 TEST_HEADER include/spdk/trace_parser.h 00:02:17.000 TEST_HEADER include/spdk/tree.h 00:02:17.000 CC test/app/bdev_svc/bdev_svc.o 00:02:17.000 CC app/fio/bdev/fio_plugin.o 00:02:17.000 TEST_HEADER include/spdk/ublk.h 00:02:17.000 TEST_HEADER include/spdk/util.h 00:02:17.000 TEST_HEADER include/spdk/uuid.h 00:02:17.000 TEST_HEADER include/spdk/version.h 00:02:17.000 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.000 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.000 TEST_HEADER include/spdk/vhost.h 00:02:17.264 TEST_HEADER include/spdk/vmd.h 00:02:17.264 TEST_HEADER include/spdk/xor.h 00:02:17.264 TEST_HEADER include/spdk/zipf.h 00:02:17.264 LINK spdk_lspci 00:02:17.264 CXX test/cpp_headers/accel.o 00:02:17.264 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:17.264 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:17.264 CC test/env/mem_callbacks/mem_callbacks.o 00:02:17.264 LINK spdk_nvme_discover 00:02:17.264 LINK histogram_perf 00:02:17.264 LINK rpc_client_test 00:02:17.264 LINK poller_perf 00:02:17.264 LINK jsoncat 00:02:17.264 LINK nvmf_tgt 00:02:17.264 LINK zipf 00:02:17.264 LINK interrupt_tgt 00:02:17.264 LINK vtophys 00:02:17.532 LINK stub 00:02:17.532 LINK spdk_trace_record 00:02:17.532 LINK env_dpdk_post_init 00:02:17.532 CXX test/cpp_headers/accel_module.o 00:02:17.532 LINK bdev_svc 00:02:17.532 LINK verify 00:02:17.532 LINK iscsi_tgt 00:02:17.532 LINK ioat_perf 00:02:17.532 LINK 
spdk_tgt 00:02:17.532 CXX test/cpp_headers/assert.o 00:02:17.532 CXX test/cpp_headers/barrier.o 00:02:17.532 CXX test/cpp_headers/base64.o 00:02:17.532 CXX test/cpp_headers/bdev.o 00:02:17.532 CXX test/cpp_headers/bdev_module.o 00:02:17.532 CXX test/cpp_headers/bdev_zone.o 00:02:17.532 LINK spdk_dd 00:02:17.791 CXX test/cpp_headers/bit_array.o 00:02:17.791 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:17.791 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:17.791 CXX test/cpp_headers/bit_pool.o 00:02:17.791 CXX test/cpp_headers/blob_bdev.o 00:02:17.791 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.791 LINK spdk_trace 00:02:17.791 LINK test_dma 00:02:17.791 LINK pci_ut 00:02:17.791 LINK nvme_fuzz 00:02:18.053 CC test/event/event_perf/event_perf.o 00:02:18.053 CC test/event/reactor/reactor.o 00:02:18.053 CXX test/cpp_headers/blobfs.o 00:02:18.053 CXX test/cpp_headers/blob.o 00:02:18.053 LINK spdk_bdev 00:02:18.053 CXX test/cpp_headers/conf.o 00:02:18.053 CXX test/cpp_headers/config.o 00:02:18.053 CC examples/sock/hello_world/hello_sock.o 00:02:18.053 CXX test/cpp_headers/cpuset.o 00:02:18.053 CXX test/cpp_headers/crc16.o 00:02:18.053 CXX test/cpp_headers/crc32.o 00:02:18.053 CC examples/thread/thread/thread_ex.o 00:02:18.053 CC test/event/reactor_perf/reactor_perf.o 00:02:18.053 CXX test/cpp_headers/crc64.o 00:02:18.053 CXX test/cpp_headers/dif.o 00:02:18.053 CC examples/vmd/lsvmd/lsvmd.o 00:02:18.053 LINK spdk_nvme 00:02:18.053 CC examples/vmd/led/led.o 00:02:18.053 CC test/event/app_repeat/app_repeat.o 00:02:18.315 CXX test/cpp_headers/dma.o 00:02:18.315 CC examples/idxd/perf/perf.o 00:02:18.315 LINK event_perf 00:02:18.315 CXX test/cpp_headers/endian.o 00:02:18.315 CC test/event/scheduler/scheduler.o 00:02:18.315 LINK reactor 00:02:18.315 LINK spdk_nvme_perf 00:02:18.315 LINK mem_callbacks 00:02:18.315 CXX test/cpp_headers/env_dpdk.o 00:02:18.315 CXX test/cpp_headers/env.o 00:02:18.315 CXX test/cpp_headers/event.o 00:02:18.315 LINK reactor_perf 00:02:18.315 CXX test/cpp_headers/fd_group.o 00:02:18.315 LINK lsvmd 00:02:18.581 CXX test/cpp_headers/fd.o 00:02:18.581 CXX test/cpp_headers/file.o 00:02:18.581 CC app/vhost/vhost.o 00:02:18.581 CXX test/cpp_headers/ftl.o 00:02:18.581 LINK app_repeat 00:02:18.581 CXX test/cpp_headers/gpt_spec.o 00:02:18.581 LINK led 00:02:18.581 LINK hello_sock 00:02:18.581 LINK vhost_fuzz 00:02:18.581 LINK spdk_nvme_identify 00:02:18.581 CXX test/cpp_headers/hexlify.o 00:02:18.581 CC test/blobfs/mkfs/mkfs.o 00:02:18.581 CC test/accel/dif/dif.o 00:02:18.581 CXX test/cpp_headers/histogram_data.o 00:02:18.581 LINK thread 00:02:18.581 CC test/nvme/aer/aer.o 00:02:18.581 LINK spdk_top 00:02:18.581 CC test/nvme/sgl/sgl.o 00:02:18.581 CC test/nvme/reset/reset.o 00:02:18.852 CC test/nvme/e2edp/nvme_dp.o 00:02:18.852 CXX test/cpp_headers/idxd.o 00:02:18.852 CXX test/cpp_headers/idxd_spec.o 00:02:18.852 CC test/nvme/overhead/overhead.o 00:02:18.852 LINK scheduler 00:02:18.852 CC test/lvol/esnap/esnap.o 00:02:18.852 LINK idxd_perf 00:02:18.852 CXX test/cpp_headers/init.o 00:02:18.852 CXX test/cpp_headers/ioat.o 00:02:18.852 CC test/nvme/err_injection/err_injection.o 00:02:18.852 CXX test/cpp_headers/ioat_spec.o 00:02:18.852 LINK vhost 00:02:18.852 CC test/nvme/reserve/reserve.o 00:02:18.852 CC test/nvme/simple_copy/simple_copy.o 00:02:18.852 CC test/nvme/startup/startup.o 00:02:18.852 CC test/nvme/connect_stress/connect_stress.o 00:02:19.117 CC test/nvme/boot_partition/boot_partition.o 00:02:19.117 CXX test/cpp_headers/iscsi_spec.o 00:02:19.117 CC 
test/nvme/compliance/nvme_compliance.o 00:02:19.117 CC test/nvme/fused_ordering/fused_ordering.o 00:02:19.117 CXX test/cpp_headers/json.o 00:02:19.117 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:19.117 LINK mkfs 00:02:19.117 LINK memory_ut 00:02:19.117 CXX test/cpp_headers/jsonrpc.o 00:02:19.117 CC test/nvme/fdp/fdp.o 00:02:19.117 CC examples/nvme/hello_world/hello_world.o 00:02:19.117 LINK aer 00:02:19.117 CXX test/cpp_headers/keyring.o 00:02:19.117 CXX test/cpp_headers/keyring_module.o 00:02:19.380 CC examples/nvme/reconnect/reconnect.o 00:02:19.380 LINK reset 00:02:19.380 LINK err_injection 00:02:19.380 LINK sgl 00:02:19.380 LINK overhead 00:02:19.380 CC examples/accel/perf/accel_perf.o 00:02:19.380 LINK nvme_dp 00:02:19.380 LINK reserve 00:02:19.380 LINK startup 00:02:19.380 CC test/nvme/cuse/cuse.o 00:02:19.380 LINK connect_stress 00:02:19.380 LINK boot_partition 00:02:19.380 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.380 LINK simple_copy 00:02:19.380 CXX test/cpp_headers/likely.o 00:02:19.380 LINK fused_ordering 00:02:19.380 LINK dif 00:02:19.380 LINK doorbell_aers 00:02:19.645 CC examples/blob/hello_world/hello_blob.o 00:02:19.645 CC examples/nvme/arbitration/arbitration.o 00:02:19.645 CXX test/cpp_headers/log.o 00:02:19.645 CXX test/cpp_headers/lvol.o 00:02:19.645 CXX test/cpp_headers/memory.o 00:02:19.645 CXX test/cpp_headers/mmio.o 00:02:19.645 CC examples/blob/cli/blobcli.o 00:02:19.645 CC examples/nvme/hotplug/hotplug.o 00:02:19.645 CXX test/cpp_headers/nbd.o 00:02:19.645 LINK nvme_compliance 00:02:19.645 CXX test/cpp_headers/net.o 00:02:19.645 CXX test/cpp_headers/notify.o 00:02:19.645 CXX test/cpp_headers/nvme.o 00:02:19.645 CXX test/cpp_headers/nvme_intel.o 00:02:19.645 CXX test/cpp_headers/nvme_ocssd.o 00:02:19.645 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:19.645 LINK hello_world 00:02:19.645 CXX test/cpp_headers/nvme_spec.o 00:02:19.645 CC examples/nvme/abort/abort.o 00:02:19.645 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.645 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.645 CXX test/cpp_headers/nvme_zns.o 00:02:19.645 CXX test/cpp_headers/nvmf_cmd.o 00:02:19.908 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:19.908 LINK fdp 00:02:19.908 CXX test/cpp_headers/nvmf.o 00:02:19.908 CXX test/cpp_headers/nvmf_spec.o 00:02:19.908 CXX test/cpp_headers/nvmf_transport.o 00:02:19.908 CXX test/cpp_headers/opal.o 00:02:19.908 CXX test/cpp_headers/opal_spec.o 00:02:19.908 CXX test/cpp_headers/pci_ids.o 00:02:19.908 LINK hello_blob 00:02:19.908 CXX test/cpp_headers/pipe.o 00:02:19.908 LINK reconnect 00:02:19.908 CXX test/cpp_headers/queue.o 00:02:20.173 CXX test/cpp_headers/reduce.o 00:02:20.173 CXX test/cpp_headers/rpc.o 00:02:20.173 CXX test/cpp_headers/scheduler.o 00:02:20.173 CXX test/cpp_headers/scsi.o 00:02:20.173 LINK cmb_copy 00:02:20.173 LINK pmr_persistence 00:02:20.173 CXX test/cpp_headers/scsi_spec.o 00:02:20.173 CXX test/cpp_headers/sock.o 00:02:20.173 CXX test/cpp_headers/stdinc.o 00:02:20.173 CXX test/cpp_headers/string.o 00:02:20.173 CXX test/cpp_headers/thread.o 00:02:20.173 LINK hotplug 00:02:20.173 CXX test/cpp_headers/trace.o 00:02:20.173 CXX test/cpp_headers/trace_parser.o 00:02:20.173 CXX test/cpp_headers/tree.o 00:02:20.173 CXX test/cpp_headers/ublk.o 00:02:20.173 CXX test/cpp_headers/util.o 00:02:20.173 LINK accel_perf 00:02:20.173 LINK nvme_manage 00:02:20.173 CXX test/cpp_headers/uuid.o 00:02:20.173 CXX test/cpp_headers/version.o 00:02:20.435 LINK arbitration 00:02:20.435 CXX test/cpp_headers/vfio_user_pci.o 00:02:20.435 CXX 
test/cpp_headers/vfio_user_spec.o 00:02:20.435 CC test/bdev/bdevio/bdevio.o 00:02:20.435 CXX test/cpp_headers/vhost.o 00:02:20.435 CXX test/cpp_headers/vmd.o 00:02:20.435 LINK iscsi_fuzz 00:02:20.435 CXX test/cpp_headers/xor.o 00:02:20.435 CXX test/cpp_headers/zipf.o 00:02:20.435 LINK blobcli 00:02:20.435 LINK abort 00:02:20.693 CC examples/bdev/hello_world/hello_bdev.o 00:02:20.693 CC examples/bdev/bdevperf/bdevperf.o 00:02:20.952 LINK bdevio 00:02:21.210 LINK hello_bdev 00:02:21.210 LINK cuse 00:02:21.468 LINK bdevperf 00:02:22.034 CC examples/nvmf/nvmf/nvmf.o 00:02:22.292 LINK nvmf 00:02:24.828 LINK esnap 00:02:24.828 00:02:24.828 real 0m55.892s 00:02:24.828 user 11m14.270s 00:02:24.828 sys 2m22.964s 00:02:24.828 10:09:14 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.828 10:09:14 make -- common/autotest_common.sh@10 -- $ set +x 00:02:24.828 ************************************ 00:02:24.828 END TEST make 00:02:24.828 ************************************ 00:02:24.828 10:09:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:24.828 10:09:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:24.828 10:09:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:24.828 10:09:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.828 10:09:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:24.828 10:09:14 -- pm/common@44 -- $ pid=1327814 00:02:24.828 10:09:14 -- pm/common@50 -- $ kill -TERM 1327814 00:02:24.828 10:09:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.828 10:09:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:24.828 10:09:14 -- pm/common@44 -- $ pid=1327816 00:02:24.828 10:09:14 -- pm/common@50 -- $ kill -TERM 1327816 00:02:24.828 10:09:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.828 10:09:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:24.828 10:09:14 -- pm/common@44 -- $ pid=1327818 00:02:24.828 10:09:14 -- pm/common@50 -- $ kill -TERM 1327818 00:02:24.828 10:09:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.828 10:09:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:24.828 10:09:14 -- pm/common@44 -- $ pid=1327848 00:02:24.828 10:09:14 -- pm/common@50 -- $ sudo -E kill -TERM 1327848 00:02:24.828 10:09:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:24.828 10:09:14 -- nvmf/common.sh@7 -- # uname -s 00:02:24.828 10:09:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:24.828 10:09:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:24.828 10:09:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:24.828 10:09:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:24.828 10:09:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:24.828 10:09:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:24.828 10:09:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:24.828 10:09:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:24.828 10:09:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:24.828 10:09:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:24.828 10:09:14 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:02:24.828 10:09:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:02:24.828 10:09:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:24.828 10:09:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:24.828 10:09:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:24.828 10:09:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:24.828 10:09:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:24.828 10:09:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:24.828 10:09:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:24.828 10:09:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:24.828 10:09:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.828 10:09:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.828 10:09:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.828 10:09:14 -- paths/export.sh@5 -- # export PATH 00:02:24.828 10:09:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.828 10:09:14 -- nvmf/common.sh@47 -- # : 0 00:02:24.828 10:09:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:24.828 10:09:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:24.828 10:09:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:24.828 10:09:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:24.828 10:09:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:24.828 10:09:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:24.828 10:09:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:24.828 10:09:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:24.828 10:09:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:24.828 10:09:14 -- spdk/autotest.sh@32 -- # uname -s 00:02:24.828 10:09:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:24.828 10:09:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:24.828 10:09:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:24.828 10:09:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:24.828 10:09:14 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:24.828 10:09:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:24.828 10:09:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:24.828 10:09:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:24.828 10:09:14 -- spdk/autotest.sh@48 -- # udevadm_pid=1382744 00:02:24.828 10:09:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:24.828 10:09:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:24.828 10:09:14 -- pm/common@17 -- # local monitor 00:02:24.828 10:09:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.828 10:09:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.828 10:09:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.828 10:09:14 -- pm/common@21 -- # date +%s 00:02:24.828 10:09:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.828 10:09:14 -- pm/common@21 -- # date +%s 00:02:24.828 10:09:14 -- pm/common@25 -- # sleep 1 00:02:24.828 10:09:14 -- pm/common@21 -- # date +%s 00:02:24.828 10:09:14 -- pm/common@21 -- # date +%s 00:02:25.089 10:09:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721894954 00:02:25.089 10:09:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721894954 00:02:25.089 10:09:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721894954 00:02:25.089 10:09:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721894954 00:02:25.089 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721894954_collect-vmstat.pm.log 00:02:25.089 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721894954_collect-cpu-load.pm.log 00:02:25.089 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721894954_collect-cpu-temp.pm.log 00:02:25.089 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721894954_collect-bmc-pm.bmc.pm.log 00:02:26.029 10:09:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:26.029 10:09:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:26.029 10:09:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:26.029 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:02:26.029 10:09:15 -- spdk/autotest.sh@59 -- # create_test_list 00:02:26.029 10:09:15 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:26.029 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:02:26.029 10:09:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:26.029 10:09:15 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.029 10:09:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
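The four pm collectors above are launched in the background with -d <output dir> -l -p <pidfile name>, and at the end of the run signal_monitor_resources (the pm/common@40-@50 trace near the top of this section) sends kill -TERM to each recorded PID. A minimal bash sketch of that start/stop lifecycle; the collector path, POWER_DIR, and the launcher-side "echo $! > pidfile" are illustrative assumptions, since the real collect-* helpers manage their own pidfiles:

#!/usr/bin/env bash
# Sketch only: POWER_DIR and the collector path are hypothetical stand-ins.
POWER_DIR=./output/power
mkdir -p "$POWER_DIR"

start_monitor() {
    local name=$1
    # Launch the collector in the background, as start_monitor_resources does.
    "./scripts/perf/pm/$name" -d "$POWER_DIR" -l -p "monitor.autotest.$(date +%s)" &
    echo $! > "$POWER_DIR/$name.pid"   # record the PID for later teardown
}

stop_monitor() {
    local pidfile="$POWER_DIR/$1.pid"
    # Mirror the [[ -e ...pid ]] guard traced above: only signal if it exists.
    [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
}

start_monitor collect-cpu-load
start_monitor collect-vmstat
# ... run the tests ...
stop_monitor collect-cpu-load
stop_monitor collect-vmstat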
00:02:26.029 10:09:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:26.029 10:09:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.029 10:09:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:26.029 10:09:15 -- common/autotest_common.sh@1455 -- # uname 00:02:26.029 10:09:15 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:26.029 10:09:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:26.029 10:09:15 -- common/autotest_common.sh@1475 -- # uname 00:02:26.029 10:09:15 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:26.029 10:09:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:26.029 10:09:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:26.029 10:09:15 -- spdk/autotest.sh@72 -- # hash lcov 00:02:26.029 10:09:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:26.029 10:09:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:26.029 --rc lcov_branch_coverage=1 00:02:26.029 --rc lcov_function_coverage=1 00:02:26.029 --rc genhtml_branch_coverage=1 00:02:26.029 --rc genhtml_function_coverage=1 00:02:26.029 --rc genhtml_legend=1 00:02:26.029 --rc geninfo_all_blocks=1 00:02:26.029 ' 00:02:26.029 10:09:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:26.029 --rc lcov_branch_coverage=1 00:02:26.029 --rc lcov_function_coverage=1 00:02:26.029 --rc genhtml_branch_coverage=1 00:02:26.029 --rc genhtml_function_coverage=1 00:02:26.029 --rc genhtml_legend=1 00:02:26.029 --rc geninfo_all_blocks=1 00:02:26.029 ' 00:02:26.029 10:09:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:26.029 --rc lcov_branch_coverage=1 00:02:26.029 --rc lcov_function_coverage=1 00:02:26.029 --rc genhtml_branch_coverage=1 00:02:26.029 --rc genhtml_function_coverage=1 00:02:26.029 --rc genhtml_legend=1 00:02:26.029 --rc geninfo_all_blocks=1 00:02:26.029 --no-external' 00:02:26.029 10:09:15 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:26.029 --rc lcov_branch_coverage=1 00:02:26.029 --rc lcov_function_coverage=1 00:02:26.029 --rc genhtml_branch_coverage=1 00:02:26.029 --rc genhtml_function_coverage=1 00:02:26.029 --rc genhtml_legend=1 00:02:26.029 --rc geninfo_all_blocks=1 00:02:26.029 --no-external' 00:02:26.029 10:09:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:26.029 lcov: LCOV version 1.14 00:02:26.029 10:09:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:44.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:44.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:56.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:56.380 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:56.380 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:02:56.380 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:02:56.380-00:02:56.382 [geninfo output condensed: the same "<header>.gcno:no functions found" / "geninfo: WARNING: GCOV did not produce any data for <header>.gcno" pair repeats for every remaining test/cpp_headers object: barrier, assert, base64, bdev, bdev_module, bdev_zone, bit_array, bit_pool, blobfs_bdev, blob_bdev, blobfs, blob, conf, config, cpuset, crc16, crc32, crc64, dif, dma, endian, env_dpdk, env, event, fd_group, fd, file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, memory, mmio, nbd, net, notify, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, opal, nvmf_transport, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec, vmd, vhost, zipf, xor]
00:03:00.565 10:09:49 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:03:00.565 10:09:49 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:00.565 10:09:49 -- common/autotest_common.sh@10 -- # set +x
00:03:00.565 10:09:49 -- spdk/autotest.sh@91 -- # rm -f
00:03:00.565 10:09:49 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:01.501 0000:84:00.0 (8086 0a54): Already using the nvme driver
00:03:01.501 0000:00:04.7 (8086 3c27): Already using the ioatdma driver
00:03:01.501 0000:00:04.6 (8086 3c26): Already using the ioatdma driver
00:03:01.501 0000:00:04.5 (8086 3c25): Already using the ioatdma driver
00:03:01.501 0000:00:04.4 (8086 3c24): Already using the ioatdma driver
00:03:01.501 0000:00:04.3 (8086 3c23): Already using the ioatdma driver
00:03:01.501 0000:00:04.2 (8086 3c22): Already using the ioatdma driver
00:03:01.501 0000:00:04.1 (8086 3c21): Already using the ioatdma driver
00:03:01.501 0000:00:04.0 (8086 3c20): Already using the ioatdma driver
00:03:01.501 0000:80:04.7 (8086 3c27): Already using the ioatdma driver
00:03:01.501 0000:80:04.6 (8086 3c26): Already using the ioatdma driver
00:03:01.501 0000:80:04.5 (8086 3c25): Already using the ioatdma driver
00:03:01.501 0000:80:04.4 (8086 3c24): Already using the ioatdma driver
00:03:01.501 0000:80:04.3 (8086 3c23): Already using the ioatdma driver
00:03:01.501 0000:80:04.2 (8086 3c22): Already using the ioatdma driver
00:03:01.501 0000:80:04.1 (8086 3c21): Already using the ioatdma driver
00:03:01.501 0000:80:04.0 (8086 3c20): Already using the ioatdma driver
00:03:01.501 10:09:51 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:01.501 10:09:51 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:01.501 10:09:51 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:01.501 10:09:51 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:01.501 10:09:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:01.501 10:09:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:01.501 10:09:51 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:01.501 10:09:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:01.501 10:09:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:01.501 10:09:51 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:01.501 10:09:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:01.501 10:09:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:01.501 10:09:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:01.501 10:09:51 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:01.501 10:09:51 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:01.501 No valid GPT data, bailing
00:03:01.501 10:09:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:01.501 10:09:51 -- scripts/common.sh@391 -- # pt=
00:03:01.501 10:09:51 -- scripts/common.sh@392 -- # return 1
00:03:01.501 10:09:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:01.501 1+0 records in
00:03:01.501 1+0 records out
00:03:01.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00250432 s, 419 MB/s
00:03:01.501 10:09:51 -- spdk/autotest.sh@118 -- # sync
00:03:01.501 10:09:51 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:01.501 10:09:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:01.501 10:09:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:03.405 10:09:52 -- spdk/autotest.sh@124 -- # uname -s
00:03:03.405 10:09:52 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:03.405 10:09:52 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:03.405 10:09:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:03.405 10:09:52 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:03.405 10:09:52 -- common/autotest_common.sh@10 -- # set +x
00:03:03.405 ************************************
00:03:03.405 START TEST setup.sh
00:03:03.405 ************************************
00:03:03.405 10:09:52 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:03.406 * Looking for test storage...
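The pre-clean pass traced above boils down to three checks per namespace: skip it if sysfs reports it as zoned, leave it alone if a partition table is found (spdk-gpt.py first, then blkid -s PTTYPE), and otherwise zero the first MiB so stale metadata cannot leak into the next run. A condensed sketch of that flow, assuming blkid alone as a stand-in for the spdk-gpt.py probe:

#!/usr/bin/env bash
# Sketch of autotest's pre-clean device loop; blkid stands in for spdk-gpt.py.
for sys in /sys/block/nvme*; do
    dev=$(basename "$sys")
    # A conventional (non-zoned) namespace reports "none" in queue/zoned.
    if [[ -e $sys/queue/zoned && $(<"$sys/queue/zoned") != none ]]; then
        echo "skipping zoned device $dev"
        continue
    fi
    # A present PTTYPE means the disk is in use; do not touch it.
    if pt=$(blkid -s PTTYPE -o value "/dev/$dev") && [[ -n $pt ]]; then
        echo "$dev carries a $pt partition table, leaving it alone"
        continue
    fi
    # No valid GPT data: wipe the first MiB, as the dd above does.
    dd if=/dev/zero of="/dev/$dev" bs=1M count=1
done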
00:03:03.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:03.406 10:09:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:03.406 10:09:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:03.406 10:09:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:03.406 10:09:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:03.406 10:09:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:03.406 10:09:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:03.406 10:09:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:03.406 10:09:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:03.406 10:09:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:03.406 10:09:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:03.406 10:09:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:03.406 10:09:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:03.406 10:09:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:03.406 10:09:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:03.406 10:09:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.406 10:09:53 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.795 10:09:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:04.795 10:09:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:04.795 10:09:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.795 10:09:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:04.795 10:09:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.795 10:09:54 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:05.364 Hugepages 00:03:05.364 node hugesize free / total 00:03:05.364 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:05.364 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.364 10:09:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.364 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:05.364 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.364 10:09:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.364 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:05.364 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.364 10:09:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.623 00:03:05.623 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]]
00:03:05.623 [acl PCI scan condensed: the identical "setup/acl.sh@19 -- # [[ <bdf> == *:*:*.* ]]" / "setup/acl.sh@20 -- # [[ ioatdma == nvme ]]" / "setup/acl.sh@20 -- # continue" / "setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _" trace repeats for each ioatdma channel from 0000:00:04.1 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7]
00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]]
00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]]
00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:05.623 10:09:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:05.623 10:09:55 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:05.623 10:09:55 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:05.623 10:09:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:05.623 ************************************
00:03:05.623 START TEST denied
00:03:05.623 ************************************
00:03:05.623 10:09:55 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:03:05.623 10:09:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0'
00:03:05.623 10:09:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:05.623 10:09:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.623 10:09:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:05.623 10:09:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0'
00:03:07.000 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0
00:03:07.000 10:09:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0
00:03:07.000 10:09:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:07.000 10:09:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:07.000 10:09:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]]
00:03:07.000 10:09:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver
00:03:07.000 10:09:56 setup.sh.acl.denied --
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:07.000 10:09:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:07.000 10:09:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:07.000 10:09:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.000 10:09:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.539 00:03:09.539 real 0m3.413s 00:03:09.539 user 0m1.053s 00:03:09.539 sys 0m1.601s 00:03:09.539 10:09:58 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:09.539 10:09:58 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:09.539 ************************************ 00:03:09.539 END TEST denied 00:03:09.539 ************************************ 00:03:09.539 10:09:58 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:09.539 10:09:58 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:09.539 10:09:58 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:09.539 10:09:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:09.539 ************************************ 00:03:09.539 START TEST allowed 00:03:09.539 ************************************ 00:03:09.539 10:09:58 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:09.539 10:09:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:03:09.539 10:09:58 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:03:09.539 10:09:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:09.539 10:09:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.539 10:09:58 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.446 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.446 10:10:00 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:11.446 10:10:00 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:11.446 10:10:00 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:11.446 10:10:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.446 10:10:00 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.822 00:03:12.822 real 0m3.479s 00:03:12.822 user 0m0.959s 00:03:12.822 sys 0m1.533s 00:03:12.822 10:10:02 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:12.822 10:10:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:12.822 ************************************ 00:03:12.822 END TEST allowed 00:03:12.822 ************************************ 00:03:12.822 00:03:12.822 real 0m9.298s 00:03:12.822 user 0m2.975s 00:03:12.822 sys 0m4.699s 00:03:12.822 10:10:02 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:12.822 10:10:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:12.822 ************************************ 00:03:12.822 END TEST acl 00:03:12.822 ************************************ 00:03:12.822 10:10:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:12.822 10:10:02 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.822 10:10:02 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.822 10:10:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:12.822 ************************************ 00:03:12.822 START TEST hugepages 00:03:12.822 ************************************ 00:03:12.822 10:10:02 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:12.822 * Looking for test storage... 00:03:12.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31076040 kB' 'MemAvailable: 35041740 kB' 'Buffers: 2704 kB' 'Cached: 14679376 kB' 'SwapCached: 0 kB' 'Active: 11527080 kB' 'Inactive: 3701476 kB' 'Active(anon): 11062292 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549680 kB' 'Mapped: 207820 kB' 'Shmem: 10515816 kB' 'KReclaimable: 411100 kB' 'Slab: 708224 kB' 'SReclaimable: 411100 kB' 'SUnreclaim: 297124 kB' 'KernelStack: 10144 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437040 kB' 'Committed_AS: 12064888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190304 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB' 00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:12.822 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:12.822-00:03:12.823 [meminfo scan condensed: the identical "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "setup/common.sh@32 -- # continue" / "setup/common.sh@31 -- # IFS=': '" / "setup/common.sh@31 -- # read -r var val _" trace repeats for every field of the meminfo dump above that is not Hugepagesize: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages; the trace is cut off here mid-scan]
00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.823 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
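The condensed span above is bash xtrace (set -x) output of the get_meminfo helper in setup/common.sh, which scans /proc/meminfo one key at a time until the requested field matches. A minimal sketch of that parsing technique, simplified from what the trace shows (the real helper also slurps the file with mapfile and can read a per-node meminfo; this sketch reads the file directly):

    get_meminfo() {                            # e.g. get_meminfo Hugepagesize
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # split "Key:   value kB" into key/value
            [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace
            echo "$val"                        # value only, e.g. 2048 (kilobytes)
            return 0
        done </proc/meminfo
        return 1                               # requested key not present
    }

The Hugepagesize match that ends the scan, and the 2048 kB default it yields, follow below.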
00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.824 10:10:02 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:12.824 10:10:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:12.824 10:10:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.824 10:10:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.824 10:10:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.824 ************************************ 00:03:12.824 START TEST default_setup 00:03:12.824 ************************************ 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.824 10:10:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.762 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:13.762 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:13.762 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:13.762 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:13.762 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:13.762 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:13.762 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 
00:03:13.762 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:13.762 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:13.762 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:13.762 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:13.762 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:13.762 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:13.762 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:13.762 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:14.021 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:14.966 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33166484 kB' 'MemAvailable: 37132192 kB' 'Buffers: 2704 kB' 'Cached: 14679460 kB' 'SwapCached: 0 kB' 'Active: 11546428 kB' 'Inactive: 3701476 kB' 'Active(anon): 11081640 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568880 kB' 'Mapped: 207912 kB' 'Shmem: 10515900 kB' 'KReclaimable: 411108 kB' 'Slab: 708416 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297308 kB' 'KernelStack: 10256 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12084404 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 190688 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB' 00:03:14.966 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [trace condensed: get_meminfo AnonHugePages; every key from MemTotal through HardwareCorrupted fails the match and hits continue] 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
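The local get=HugePages_Surp / local node= prologue just traced is get_meminfo selecting its input, and it continues below with the mem_f selection and mapfile steps: with an empty node it reads the global /proc/meminfo, otherwise it switches to that node's meminfo and strips the "Node N " prefix from each line (the mem=("${mem[@]#Node +([0-9]) }") step, an extglob pattern). A hedged sketch of that source selection as the trace suggests it; node=0 here is only an illustration:

    shopt -s extglob                        # needed for the +([0-9]) pattern below
    node=0                                  # hypothetical: query NUMA node 0
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"               # slurp all lines, as in the trace
    mem=("${mem[@]#Node +([0-9]) }")        # drop "Node 0 " prefixes (a no-op for /proc/meminfo)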
00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.967 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33169684 kB' 'MemAvailable: 37135392 kB' 'Buffers: 2704 kB' 'Cached: 14679460 kB' 'SwapCached: 0 kB' 'Active: 11545236 kB' 'Inactive: 3701476 kB' 'Active(anon): 11080448 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567676 kB' 'Mapped: 207928 kB' 'Shmem: 10515900 kB' 'KReclaimable: 411108 kB' 'Slab: 708384 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297276 kB' 'KernelStack: 10336 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190448 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.968 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- 
00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: setup/common.sh@31-32 keeps splitting /proc/meminfo lines with IFS=': ' and comparing each key (SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) against HugePages_Surp; every one misses and takes the continue branch]
00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
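[editor's sketch — reconstructed from the xtrace above, not the actual SPDK setup/common.sh; the names (get_meminfo, get, node, mem_f, var/val) come from the trace, the body is approximate. This is what the per-key scan just traced is doing:]

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # Print the value of one meminfo field, system-wide or for one NUMA node.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # A per-node query reads that node's own meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip it
        # (a no-op for /proc/meminfo).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp   # -> 0 in the run above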
00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:14.969 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: setup/common.sh@17-31 sets get=HugePages_Rsvd with node unset; /sys/devices/system/node/node/meminfo does not exist, so mem_f stays /proc/meminfo; mapfile loads it, the "Node +([0-9]) " prefix strip is a no-op, and the IFS=': ' read loop starts]
00:03:14.970 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33169432 kB' 'MemAvailable: 37135140 kB' 'Buffers: 2704 kB' 'Cached: 14679476 kB' 'SwapCached: 0 kB' 'Active: 11544016 kB' 'Inactive: 3701476 kB' 'Active(anon): 11079228 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566504 kB' 'Mapped: 207852 kB' 'Shmem: 10515916 kB' 'KReclaimable: 411108 kB' 'Slab: 708220 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297112 kB' 'KernelStack: 10048 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190368 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
00:03:14.970 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: setup/common.sh@32 walks the dump from MemTotal through Unaccepted; no key matches HugePages_Rsvd, each takes the continue branch]
00:03:14.971 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: HugePages_Total and HugePages_Free also miss]
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: same lookup setup as above — node unset, mem_f=/proc/meminfo, mapfile + prefix strip + IFS=': ' read loop]
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33169432 kB' 'MemAvailable: 37135140 kB' 'Buffers: 2704 kB' 'Cached: 14679480 kB' 'SwapCached: 0 kB' 'Active: 11544128 kB' 'Inactive: 3701476 kB' 'Active(anon): 11079340 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566612 kB' 'Mapped: 207852 kB' 'Shmem: 10515920 kB' 'KReclaimable: 411108 kB' 'Slab: 708220 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297112 kB' 'KernelStack: 10032 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190368 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: the scan against HugePages_Total begins: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached all miss and continue]
00:03:14.972 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: setup/common.sh@31-32 checks the remaining keys (Active through Unaccepted) against HugePages_Total; every one misses and takes the continue branch]
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: setup/hugepages.sh@29-30 loops over /sys/devices/system/node/node+([0-9]) and records nodes_sys[0]=1024 and nodes_sys[1]=0]
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:14.973 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.974 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.974 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:14.974 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: this time node=0, /sys/devices/system/node/node0/meminfo exists, so setup/common.sh@24 switches mem_f to it; mapfile loads the per-node file and the "Node +([0-9]) " prefix strip now applies]
00:03:14.974 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19401652 kB' 'MemUsed: 13480096 kB' 'SwapCached: 0 kB' 'Active: 6949400 kB' 'Inactive: 3397596 kB' 'Active(anon): 6737992 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10082972 kB' 'Mapped: 92848 kB' 'AnonPages: 267168 kB' 'Shmem: 6473968 kB' 'KernelStack: 5688 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271812 kB' 'Slab: 423924 kB' 'SReclaimable: 271812 kB' 'SUnreclaim: 152112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
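[editor's sketch — the get_nodes step just traced, in isolation. The loop shape and the nodes_sys assignments (node0=1024, node1=0) come from the trace; where the per-node count is read from is not visible here, so the sysfs nr_hugepages path below is an assumption:]

    #!/usr/bin/env bash
    shopt -s extglob

    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}    # .../node0 -> 0, .../node1 -> 1
        nodes_sys[idx]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    echo "no_nodes=$no_nodes"   # 2 on the machine in this log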
00:03:14.974 10:10:04 setup.sh.hugepages.default_setup -- [trace condensed: setup/common.sh@31-32 scans the node0 dump key by key (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon)/(file), Inactive(anon)/(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted) against HugePages_Surp; each misses and continues; the captured log breaks off mid-scan at the HugePages_Total comparison]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:14.975 node0=1024 expecting 1024 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:14.975 00:03:14.975 real 0m2.169s 00:03:14.975 user 0m0.597s 00:03:14.975 sys 0m0.739s 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:14.975 10:10:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:14.975 ************************************ 00:03:14.975 END TEST default_setup 00:03:14.975 ************************************ 00:03:14.975 10:10:04 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:14.975 10:10:04 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:14.975 10:10:04 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:14.975 10:10:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:14.975 ************************************ 00:03:14.975 START TEST per_node_1G_alloc 00:03:14.975 ************************************ 00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
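Everything condensed above is setup/common.sh's get_meminfo helper at work: it walks a meminfo file (system-wide /proc/meminfo, or a node's /sys/devices/system/node/nodeN/meminfo) one "Key: value" line at a time and prints the value of the requested key. A minimal stand-alone sketch of that pattern, under assumed names (get_meminfo_value is hypothetical; the real helper also takes an optional node argument and strips "Node N " prefixes):

    # Hedged re-creation of the scan traced above: IFS=': ' splits each
    # "Key: value kB" line; every non-matching key hits "continue".
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        echo 0   # assumed fallback when the field is absent
    }

    get_meminfo_value HugePages_Surp   # prints 0 on this runner, as traced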
00:03:14.975 10:10:04 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:14.975 10:10:04 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:14.975 10:10:04 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:14.975 10:10:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:14.975 ************************************
00:03:14.975 START TEST per_node_1G_alloc
00:03:14.975 ************************************
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.975 10:10:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:15.915 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:15.915 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.915 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:15.915 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:15.915 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:15.915 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:15.915 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:15.915 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:15.915 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:15.915 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:15.915 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:15.915 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:15.915 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:15.915 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:15.915 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:15.915 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:15.915 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
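Before scripts/setup.sh ran above, get_test_nr_hugepages turned the requested size into per-node page counts: 1048576 kB (1G) at the default 2048 kB hugepage size is 512 pages, assigned to each of nodes 0 and 1 and exported as NRHUGE=512 HUGENODE=0,1. A rough sketch of that arithmetic (the division step is inferred from the traced numbers, not copied from hugepages.sh):

    # 1048576 kB requested / 2048 kB per hugepage = 512 pages per listed node
    size=1048576
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    nr_hugepages=$((size / default_hugepages))
    declare -A nodes_test
    for node in 0 1; do
        nodes_test[$node]=$nr_hugepages   # mirrors nodes_test[_no_nodes]=512 above
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"   # NRHUGE=512 HUGENODE=0,1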
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.915 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.916 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33152620 kB' 'MemAvailable: 37118328 kB' 'Buffers: 2704 kB' 'Cached: 14679568 kB' 'SwapCached: 0 kB' 'Active: 11544776 kB' 'Inactive: 3701476 kB' 'Active(anon): 11079988 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567176 kB' 'Mapped: 208020 kB' 'Shmem: 10516008 kB' 'KReclaimable: 411108 kB' 'Slab: 708424 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297316 kB' 'KernelStack: 10016 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190496 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
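The printf above echoes the snapshot that get_meminfo just captured with mapfile. Because verify_nr_hugepages called it with no node argument, the "[[ -e /sys/devices/system/node/node/meminfo ]]" probe fails (node= is empty) and the system-wide file is used. A small sketch of that capture step; the extglob prefix-stripping only matters for the per-node files, whose lines begin "Node N ":

    # Capture the snapshot one line per array element, as traced above.
    shopt -s extglob
    node=                                # empty here; a per-node call would set e.g. node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # strip "Node N " prefixes (no-op for /proc/meminfo)
    printf '%s\n' "${mem[@]}"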
10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 [xtrace condensed: the field scan walks every /proc/meminfo key from MemTotal through HardwareCorrupted, "continue"-ing until AnonHugePages matches]
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33153312 kB' 'MemAvailable: 37119020 kB' 'Buffers: 2704 kB' 'Cached: 14679572 kB' 'SwapCached: 0 kB' 'Active: 11544764 kB' 'Inactive: 3701476 kB' 'Active(anon): 11079976 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567160 kB' 'Mapped: 207952 kB' 'Shmem: 10516012 kB' 'KReclaimable: 411108 kB' 'Slab: 708424 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297316 kB' 'KernelStack: 10064 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190448 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
00:03:16.183 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 [xtrace condensed: the field scan repeats, "continue"-ing on every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
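With anon and surp both 0, one more lookup (HugePages_Rsvd, traced next) completes the inputs for the per-node comparison that default_setup printed earlier as "node0=1024 expecting 1024". A hedged sketch of that final accounting, reusing the get_meminfo_value stand-in from above (hugepages.sh's exact adjustment arithmetic is not visible in this excerpt, so the expected-count formula here is an assumption):

    # Assemble the verdict the way the earlier trace output suggests.
    anon=$(get_meminfo_value AnonHugePages)      # 0 in this run
    surp=$(get_meminfo_value HugePages_Surp)     # 0
    resv=$(get_meminfo_value HugePages_Rsvd)     # 0, read in the next trace block
    total=$(get_meminfo_value HugePages_Total)   # 1024 = 512 pages on each of 2 nodes
    expected=$((512 * 2))                        # NRHUGE per node across nodes 0 and 1
    echo "total=$total expecting $expected (anon=$anon surp=$surp resv=$resv)"
    [[ $total -eq $expected ]]                   # analogue of the traced [[ 1024 == \1\0\2\4 ]]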
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.185 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33153312 kB' 'MemAvailable: 37119020 kB' 'Buffers: 2704 kB' 'Cached: 14679572 kB' 'SwapCached: 0 kB' 'Active: 11544144 kB' 'Inactive: 3701476 kB' 'Active(anon): 11079356 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566520 kB' 'Mapped: 207872 kB' 'Shmem: 10516012 kB' 'KReclaimable: 411108 kB' 'Slab: 708392 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297284 kB' 'KernelStack: 10048 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190448 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every field from MemTotal onward is compared against HugePages_Rsvd and skipped with `continue` until the match below]
00:03:16.187 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:16.187 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.187 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.187 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:16.187 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:16.187 nr_hugepages=1024
10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:16.187 resv_hugepages=0
10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:16.187 surplus_hugepages=0
10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:16.187 anon_hugepages=0
10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.187 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
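These arithmetic guards are the harness's hugepage bookkeeping: the 1024 pages requested via nr_hugepages must be accounted for by counts the kernel reports once surplus and reserved pages are folded in (the literal 1024 on the left of each (( ... )) is a value a get_meminfo call returned moments earlier). A hedged sketch of the same invariant, reusing the hypothetical get_meminfo_sketch helper from above (illustrative only):

    nr_hugepages=1024                             # pages the test requested
    surp=$(get_meminfo_sketch HugePages_Surp)     # pages allocated beyond the static pool
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # pages promised to mappings but not yet faulted in
    total=$(get_meminfo_sketch HugePages_Total)   # pages the kernel actually holds

    # Consistent pool: here 1024 == 1024 + 0 + 0.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool out of balance" >&2

The next call, traced below, fetches that HugePages_Total figure.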
00:03:16.187 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get_meminfo re-enters with get=HugePages_Total and node unset; /proc/meminfo is mapfile'd and scanned exactly as before]
00:03:16.187 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33153312 kB' 'MemAvailable: 37119020 kB' 'Buffers: 2704 kB' 'Cached: 14679612 kB' 'SwapCached: 0 kB' 'Active: 11544468 kB' 'Inactive: 3701476 kB' 'Active(anon): 11079680 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566796 kB' 'Mapped: 207872 kB' 'Shmem: 10516052 kB' 'KReclaimable: 411108 kB' 'Slab: 708392 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297284 kB' 'KernelStack: 10048 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190448 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the field-by-field scan skips everything up to the HugePages_Total match below]
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
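get_nodes has just built the expectation: two NUMA nodes, 512 pages each (the 1024-page pool split evenly). The per-node checks that follow reuse get_meminfo with a node argument, which swaps the input file to /sys/devices/system/node/node<N>/meminfo; there every line carries a "Node N " prefix that has to be stripped before the same field scan can run. A sketch of that variant, mirroring the traced mapfile/extglob steps (illustrative only, not the verbatim SPDK source):

    shopt -s extglob                               # the +([0-9]) pattern below needs extglob

    get_node_meminfo_sketch() {                    # e.g. get_node_meminfo_sketch 0 HugePages_Surp
        local node=$1 get=$2 mem line var val _
        mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")           # drop the leading "Node N " from every line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

The node enumeration at hugepages.sh@29 works the same way: for node in /sys/devices/system/node/node+([0-9]) globs node0 and node1, and ${node##*node} peels the numeric index off the path, which is how nodes_sys[0] and nodes_sys[1] each come to expect 512 pages.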
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get_meminfo enters with get=HugePages_Surp and node=0; since /sys/devices/system/node/node0/meminfo exists, mem_f switches to it, the file is mapfile'd, and the "Node 0 " prefix is stripped from every line]
00:03:16.189 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20438664 kB' 'MemUsed: 12443084 kB' 'SwapCached: 0 kB' 'Active: 6949480 kB' 'Inactive: 3397596 kB' 'Active(anon): 6738072 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10082996 kB' 'Mapped: 92868 kB' 'AnonPages: 267212 kB' 'Shmem: 6473992 kB' 'KernelStack: 5672 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271812 kB' 'Slab: 424024 kB' 'SReclaimable: 271812 kB' 'SUnreclaim: 152212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: node0's fields are compared one by one against HugePages_Surp, hitting `continue` on each mismatch; the scan continues]
00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.190 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.191 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.191 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12715600 kB' 'MemUsed: 6693832 kB' 'SwapCached: 0 kB' 'Active: 4595048 kB' 'Inactive: 303880 kB' 'Active(anon): 4341668 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4599344 kB' 'Mapped: 115004 kB' 'AnonPages: 299592 kB' 'Shmem: 4042084 kB' 'KernelStack: 4376 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139296 kB' 'Slab: 284368 kB' 'SReclaimable: 139296 kB' 'SUnreclaim: 145072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:03:16.191 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [read/compare loop over the node1 meminfo fields: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free; none matches HugePages_Surp, continue]
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:16.192 node0=512 expecting 512
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:16.192 node1=512 expecting 512
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:16.192
00:03:16.192 real 0m1.171s
00:03:16.192 user 0m0.547s
00:03:16.192 sys 0m0.657s
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:16.192 10:10:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:16.192 ************************************
00:03:16.192 END TEST per_node_1G_alloc
00:03:16.192 ************************************
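Every get_meminfo call in the trace above follows the same pattern: pick /proc/meminfo or the per-node sysfs file, strip the "Node N " prefix, then split each line on ': ' until the requested field matches and echo its value. Below is a minimal standalone sketch of that pattern, assuming bash 4+; function and variable names are illustrative, not the exact setup/common.sh source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch: return the value of one meminfo field, optionally for a NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; fall back to the global file otherwise.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then  # e.g. HugePages_Surp
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp 1    # prints 0 on the node1 snapshot above

The linear scan is why the trace shows one compare-and-continue per field: meminfo is only a few dozen lines per node, so there is no need for anything faster.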
00:03:16.192 10:10:05 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:16.192 10:10:05 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:16.192 10:10:05 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:16.192 10:10:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:16.192 ************************************
00:03:16.192 START TEST even_2G_alloc
00:03:16.192 ************************************
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:16.192 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.193 10:10:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
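The numbers in the get_test_nr_hugepages trace follow directly from the 2048 kB hugepage size reported in the meminfo snapshots: 2097152 kB / 2048 kB = 1024 pages, and with HUGE_EVEN_ALLOC=yes the two NUMA nodes each get 1024 / 2 = 512. A hedged sketch of that arithmetic, with illustrative variable names rather than the hugepages.sh originals:

    #!/usr/bin/env bash
    # Sketch: derive per-node hugepage counts the way the trace above does.
    size_kb=2097152                                          # requested pool: 2 GiB in kB
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 kB on this box
    nr_hugepages=$(( size_kb / hp_kb ))                      # 1024 pages total
    no_nodes=2                                               # NUMA nodes on the test rig
    nodes_test=()
    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))      # even split: 512 per node
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting $(( nr_hugepages / no_nodes ))"
    done

Run on a 2-node machine this prints the same "node0=512 expecting 512" / "node1=512 expecting 512" pair the verification step echoes later.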
00:03:17.135 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:17.135 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:17.135 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:17.135 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:17.135 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:17.135 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:17.135 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:17.135 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:17.135 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:17.135 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:17.135 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:17.135 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:17.135 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:17.135 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:17.135 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:17.135 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:17.135 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.135 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33153884 kB' 'MemAvailable: 37119592 kB' 'Buffers: 2704 kB' 'Cached: 14679700 kB' 'SwapCached: 0 kB' 'Active: 11545396 kB' 'Inactive: 3701476 kB' 'Active(anon): 11080608 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567664 kB' 'Mapped: 207944 kB' 'Shmem: 10516140 kB' 'KReclaimable: 411108 kB' 'Slab: 708480 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297372 kB' 'KernelStack: 10064 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12087264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190448 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
00:03:17.136 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [read/compare loop over the /proc/meminfo fields: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted; none matches AnonHugePages, continue]
00:03:17.136 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
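The @96 test above is what gates this anon lookup: AnonHugePages is only consulted because transparent hugepages are not set to [never] on this host (the bracketed selection is madvise). A sketch of the same gate, assuming the standard sysfs path and the 2048 kB page size from the snapshot; the conversion step is an assumption for illustration, not the verbatim hugepages.sh logic:

    #!/usr/bin/env bash
    # Sketch: count THP-backed anonymous memory only when THP can be active.
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp_state != *"[never]"* ]]; then
        # kB of anonymous THP, converted to 2048 kB hugepage units
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        anon=$(( anon_kb / 2048 ))
    fi
    echo "anon=$anon"   # 0 in the run above, since AnonHugePages: 0 kB

Excluding THP here keeps the verification honest: transparent hugepages would otherwise inflate the apparent hugepage count without belonging to the reserved pool.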
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.137 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33154344 kB' 'MemAvailable: 37120052 kB' 'Buffers: 2704 kB' 'Cached: 14679700 kB' 'SwapCached: 0 kB' 'Active: 11545216 kB' 'Inactive: 3701476 kB' 'Active(anon): 11080428 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567484 kB' 'Mapped: 207956 kB' 'Shmem: 10516140 kB' 'KReclaimable: 411108 kB' 'Slab: 708512 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297404 kB' 'KernelStack: 10016 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190400 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [read/compare loop over the /proc/meminfo fields: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack ... still scanning for HugePages_Surp]
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.138 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 
10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 
10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.403 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- 
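The trace above is a single call to get_meminfo from setup/common.sh: it slurps /proc/meminfo (or a per-node meminfo file) into an array, then walks the lines with IFS=': '; read -r var val _ until the requested key (here HugePages_Surp) matches, echoes that key's value, and returns. Below is a minimal sketch of what the common.sh@17-@33 tags are tracing; the loop form, the here-string, and the fallthrough return are assumptions, not the verbatim SPDK source:

    # get_meminfo sketch, reconstructed from the common.sh@17-@33 xtrace tags above.
    shopt -s extglob   # the +([0-9]) pattern on the @29 line needs extglob

    get_meminfo() {
        local get=$1 node=$2      # @17-@18: meminfo key, optional NUMA node
        local var val mem_f mem   # @19-@20
        mem_f=/proc/meminfo                                     # @22: system-wide default
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo    # @23-@24: per-node file
        mapfile -t mem < "$mem_f"                               # @28: read every line
        mem=("${mem[@]#Node +([0-9]) }")                        # @29: drop "Node N " prefixes
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"              # @31: split "Key: value kB"
            [[ $var == "$get" ]] || continue                    # @32: skip non-matching keys
            echo "$val"                                         # @33: e.g. 0 or 1024
            return 0
        done
        return 1   # key not found (assumed behavior)
    }

Each non-matching key costs one read/test/continue round trip under set -x, which is why a single lookup of a HugePages_* counter near the bottom of /proc/meminfo expands into hundreds of trace lines here. The same function is called again below for HugePages_Rsvd and HugePages_Total.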
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.404 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33154344 kB' 'MemAvailable: 37120052 kB' 'Buffers: 2704 kB' 'Cached: 14679720 kB' 'SwapCached: 0 kB' 'Active: 11544472 kB' 'Inactive: 3701476 kB' 'Active(anon): 11079684 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566672 kB' 'Mapped: 207880 kB' 'Shmem: 10516160 kB' 'KReclaimable: 411108 kB' 'Slab: 708504 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297396 kB' 'KernelStack: 10000 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190384 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[... repetitive xtrace elided: the same setup/common.sh@31-@32 read/test/continue cycle for every non-matching key, MemTotal through HugePages_Free, this time against HugePages_Rsvd ...]
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:17.406 nr_hugepages=1024
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:17.406 resv_hugepages=0
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:17.406 surplus_hugepages=0
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:17.406 anon_hugepages=0
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
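Taken together, the hugepages.sh@99-@109 lines are a bookkeeping check on the 1024 pages this test requested: surplus and reserved pages are read back (both 0), the four counters are reported, and the requested count must equal allocated plus surplus plus reserved. A compact restatement follows; the `want` name is assumed, since the script's own variable is not visible in the trace:

    # Accounting check sketched from the hugepages.sh@99-@110 trace above.
    want=1024                               # pages requested earlier in the run
    nr_hugepages=1024                       # kernel-side count, set up before this check
    surp=$(get_meminfo HugePages_Surp)      # @99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)      # @100 -> 0
    echo "nr_hugepages=$nr_hugepages"       # @102-@105: report all four counters
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=0"
    (( want == nr_hugepages + surp + resv ))       # @107: requested == allocated+surplus+reserved
    (( want == nr_hugepages ))                     # @109: with surp=resv=0, the same test
    (( $(get_meminfo HugePages_Total) == want ))   # @110: /proc/meminfo must agree -> 1024

The @110 lookup is what starts below; after it succeeds, get_nodes expects the 1024 pages to be split evenly, 512 per node across the two NUMA nodes (even_2G_alloc: 512 x 2 MiB = 1 GiB per node), and get_meminfo HugePages_Surp 0 then repeats the same scan against /sys/devices/system/node/node0/meminfo.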
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.406 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33158692 kB' 'MemAvailable: 37124400 kB' 'Buffers: 2704 kB' 'Cached: 14679748 kB' 'SwapCached: 0 kB' 'Active: 11544572 kB' 'Inactive: 3701476 kB' 'Active(anon): 11079784 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566776 kB' 'Mapped: 207880 kB' 'Shmem: 10516188 kB' 'KReclaimable: 411108 kB' 'Slab: 708504 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297396 kB' 'KernelStack: 10000 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12082612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190384 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[... repetitive xtrace elided: the same setup/common.sh@31-@32 read/test/continue cycle for every non-matching key, MemTotal through Unaccepted, this time against HugePages_Total ...]
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
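The read/compare/continue churn above (and in the condensed spans) is the body of the get_meminfo helper in setup/common.sh. The following is a minimal sketch reconstructed from the sh@NN markers in this trace, not the authoritative SPDK source; in particular the loop plumbing around read is an assumption:

    shopt -s extglob                                   # the +([0-9]) pattern below needs extglob
    get_meminfo() {                                    # sketch reconstructed from the xtrace
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # Prefer the per-node meminfo file when a node was requested and it exists (common.sh@23-24).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                      # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")               # common.sh@29: strip the "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"     # common.sh@31
            [[ $var == "$get" ]] || continue           # one compare/continue pair per key above
            echo "$val"                                # common.sh@33: e.g. 1024 for HugePages_Total
            return 0
        done
    }

Called bare (get_meminfo HugePages_Total) it reads /proc/meminfo; called with a node argument (get_meminfo HugePages_Surp 0, as at hugepages.sh@117) it reads that node's own meminfo file instead.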
00:03:17.408 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20435852 kB' 'MemUsed: 12445896 kB' 'SwapCached: 0 kB' 'Active: 6949312 kB' 'Inactive: 3397596 kB' 'Active(anon): 6737904 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10083080 kB' 'Mapped: 92876 kB' 'AnonPages: 266976 kB' 'Shmem: 6474076 kB' 'KernelStack: 5672 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271812 kB' 'Slab: 423964 kB' 'SReclaimable: 271812 kB' 'SUnreclaim: 152152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: the node0 scan walks every key from MemTotal through Unaccepted with the usual IFS=': ' / read -r var val _ / [[ key == HugePages_Surp ]] / continue cycle; none matches until the HugePages_* entries below]
00:03:17.409 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.409 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.409 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.410 10:10:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.410 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:17.410 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:17.410 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.410 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.410 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.410 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
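One detail worth pulling out before the node1 dump: lines in /sys/devices/system/node/nodeN/meminfo carry a "Node N " prefix that /proc/meminfo lines lack, which is exactly what the common.sh@29 expansion strips. A standalone illustration (assumes a node0 exists; the values vary per host):

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    echo "raw:      ${mem[0]}"             # e.g. "Node 0 MemTotal: 32881748 kB"
    mem=("${mem[@]#Node +([0-9]) }")       # the same expansion as common.sh@29
    echo "stripped: ${mem[0]}"             # e.g. "MemTotal: 32881748 kB"

Without the strip, every IFS=': ' read would set var to "Node" and the scan could never match a key.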
00:03:17.410 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12722876 kB' 'MemUsed: 6686556 kB' 'SwapCached: 0 kB' 'Active: 4595380 kB' 'Inactive: 303880 kB' 'Active(anon): 4342000 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4599392 kB' 'Mapped: 115004 kB' 'AnonPages: 299896 kB' 'Shmem: 4042132 kB' 'KernelStack: 4360 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139296 kB' 'Slab: 284540 kB' 'SReclaimable: 139296 kB' 'SUnreclaim: 145244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: the node1 scan is identical in shape to node0's — every key from MemTotal through Unaccepted is read, compared against HugePages_Surp and skipped with continue]
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:17.411 node0=512 expecting 512
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:17.411 node1=512 expecting 512
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:17.411
00:03:17.411 real 0m1.150s
00:03:17.411 user 0m0.511s
00:03:17.411 sys 0m0.667s
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:17.411 10:10:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:17.411 ************************************
00:03:17.411 END TEST even_2G_alloc
00:03:17.411 ************************************
00:03:17.411 10:10:07 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:17.411 10:10:07 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:17.411 10:10:07 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:17.411 10:10:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:17.411 ************************************
00:03:17.411 START TEST odd_alloc
00:03:17.411 ************************************
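Before odd_alloc starts, the arithmetic the hugepages.sh@110-@130 entries just performed is worth spelling out: the kernel-reported HugePages_Total must equal the requested pages plus surplus plus reserved, and each node's expected count is its assigned share plus that node's own surplus. A sketch assembled from the trace follows (names track the sh@NN markers; this is not the verbatim verify_nr_hugepages body):

    # Sketch of the accounting at hugepages.sh@110-@130.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1  # @110: 1024 == 1024
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                    # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # @117: adds 0 on both nodes
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}" # @128
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1     # @130: [[ 512 == 512 ]]
    done

With no surplus or reserved pages in play, both nodes come out at the requested 512 and the test passes.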
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.411 10:10:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
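The @81-@84 loop above is where the odd page count gets split across the two NUMA nodes: 2098176 kB works out to 1024.5 two-megabyte pages, which the script rounds up to nr_hugepages=1025; node1 then receives floor(1025/2) = 512 and node0 the 513-page remainder (compare 'Hugetlb: 2099200 kB' in the meminfo dump further down). A hedged reconstruction of that split, reproducing only the arithmetic (the ':' entries in the trace appear to be default-value no-ops and are skipped):

    # Distributes _nr_hugepages across _no_nodes, highest node first, matching
    # nodes_test[1]=512 then nodes_test[0]=513 in the trace; a sketch, not the script.
    _nr_hugepages=1025 _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))  # 512, then 513
        (( _nr_hugepages -= nodes_test[_no_nodes - 1] ))            # 513 left, then 0
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"                                         # prints: 513 512

Dividing the remaining pages by the remaining node count at each step guarantees the leftover page from an odd total lands on node0. The setup.sh output follows.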
00:03:18.354 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:18.354 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.354 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:18.354 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:18.354 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:18.354 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:18.354 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:18.354 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:18.354 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:18.354 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:18.354 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:18.354 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:18.354 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:18.354 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:18.354 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:18.354 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:18.354 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
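The @96 guard compares the string 'always [madvise] never' — which matches the format of the kernel's /sys/kernel/mm/transparent_hugepage/enabled knob, the assumed source of that string — against the pattern *[never]*. The brackets mark the active THP mode, so the comparison only fails, and AnonHugePages is only skipped, when THP is pinned to [never]. Roughly:

    # Hedged reconstruction of hugepages.sh@96-@97; the file path is an assumption
    # inferred from the "always [madvise] never" string in the trace.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                # 0 kB on this host, per the dump below
    else
        anon=0
    fi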
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.354 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33163044 kB' 'MemAvailable: 37128752 kB' 'Buffers: 2704 kB' 'Cached: 14679836 kB' 'SwapCached: 0 kB' 'Active: 11541872 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077084 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563928 kB' 'Mapped: 206860 kB' 'Shmem: 10516276 kB' 'KReclaimable: 411108 kB' 'Slab: 708340 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297232 kB' 'KernelStack: 9968 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12071300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190384 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[trace condensed: the global scan compares every key from MemTotal through HardwareCorrupted against AnonHugePages with the usual IFS=': ' / read -r var val _ / [[ ... ]] / continue cycle; none matches]
10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
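
What the elided scan above establishes: get_meminfo answered 0 for AnonHugePages, so the test records anon=0. A minimal sketch of the lookup loop being traced at common.sh@31-33, reconstructed from this log rather than copied from SPDK's setup/common.sh (the function name and the direct read from /proc/meminfo are illustrative simplifications):

    #!/usr/bin/env bash
    # Sketch of the traced lookup loop (reconstructed from the log,
    # not the verbatim SPDK helper). Scans "Key: value [kB]" lines
    # for one key and prints its numeric value.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Every non-matching key appears in the trace as one
            # "[[ <key> == ... ]]" / "continue" pair (common.sh@32).
            [[ $var == "$get" ]] || continue
            echo "$val"   # common.sh@33: print the value, then return
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch AnonHugePages   # prints 0 on this runner (value in kB)

One lookup therefore costs one trace line per meminfo key, which is what inflates this portion of the log.
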
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.355 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.356 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33162792 kB' 'MemAvailable: 37128500 kB' 'Buffers: 2704 kB' 'Cached: 14679836 kB' 'SwapCached: 0 kB' 'Active: 11542352 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077564 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564420 kB' 'Mapped: 206796 kB' 'Shmem: 10516276 kB' 'KReclaimable: 411108 kB' 'Slab: 708336 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297228 kB' 'KernelStack: 10000 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12071320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190352 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[repetitive trace elided: setup/common.sh@31-32 compares each snapshot key above against HugePages_Surp and skips every non-match with continue]
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
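
Each lookup begins by re-snapshotting meminfo: the long printf '%s\n' line above (common.sh@16) is that snapshot being fed to the loop. The setup traced at common.sh@17-29 is what lets the same loop serve both the system-wide file and a per-NUMA-node file; a sketch under that reading of the trace (the surrounding function and its fallback logic are illustrative, while the paths and the prefix-stripping expansion come from the traced commands):

    #!/usr/bin/env bash
    shopt -s extglob   # required by the +([0-9]) pattern seen at common.sh@29

    # Sketch of the traced snapshot step (reconstructed from this log).
    # With no node argument, [[ -n '' ]] fails and /proc/meminfo is used;
    # with a node, /sys/devices/system/node/node<N>/meminfo is read, whose
    # lines carry a "Node <N> " prefix that the expansion strips so both
    # sources parse identically downstream.
    snapshot_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # no-op for /proc/meminfo lines
        printf '%s\n' "${mem[@]}"
    }

    snapshot_meminfo      # system-wide, like the snapshots in this log
    snapshot_meminfo 0    # per-node form, when node0 exists
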
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.357 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33162460 kB' 'MemAvailable: 37128168 kB' 'Buffers: 2704 kB' 'Cached: 14679856 kB' 'SwapCached: 0 kB' 'Active: 11541816 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077028 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563892 kB' 'Mapped: 206796 kB' 'Shmem: 10516296 kB' 'KReclaimable: 411108 kB' 'Slab: 708364 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297256 kB' 'KernelStack: 10000 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12071340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[repetitive trace elided: setup/common.sh@31-32 compares each snapshot key above against HugePages_Rsvd and skips every non-match with continue]
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:18.359 nr_hugepages=1025
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:18.359 resv_hugepages=0
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:18.359 surplus_hugepages=0
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:18.359 anon_hugepages=0
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:18.359 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
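
These four values are the point of the odd_alloc case: an odd hugepage count (1025) that cannot be split evenly across two NUMA nodes, which is presumably what this test is probing, and the guards at hugepages.sh@107-109 check that the kernel's ledger still adds up. The arithmetic from this run, as a self-contained check (the variable names follow the trace; the script around them is illustrative):

    #!/usr/bin/env bash
    # Consistency guards traced at hugepages.sh@107-109, with this run's
    # values. Surplus and reserved pages would otherwise hide an
    # allocation shortfall behind a matching-looking total.
    nr_hugepages=1025   # requested 2048 kB pages (Hugepagesize above)
    surp=0              # HugePages_Surp (get_meminfo above)
    resv=0              # HugePages_Rsvd (get_meminfo above)
    anon=0              # AnonHugePages  (get_meminfo above)

    # 1025 == 1025 + 0 + 0 and 1025 == 1025, so both guards pass here.
    (( 1025 == nr_hugepages + surp + resv )) || { echo 'hugepage accounting mismatch'; exit 1; }
    (( 1025 == nr_hugepages )) || { echo 'unexpected hugepage count'; exit 1; }
    echo "accounting consistent: total=$nr_hugepages surplus=$surp reserved=$resv anon=$anon"

The trace below then re-reads HugePages_Total from the meminfo snapshot.
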
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.625 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.626 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33162460 kB' 'MemAvailable: 37128168 kB' 'Buffers: 2704 kB' 'Cached: 14679876 kB' 'SwapCached: 0 kB' 'Active: 11541808 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077020 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563892 kB' 'Mapped: 206796 kB' 'Shmem: 10516316 kB' 'KReclaimable: 411108 kB' 'Slab: 708364 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297256 kB' 'KernelStack: 10000 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12071360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190352 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[repetitive trace elided: setup/common.sh@31-32 compares each snapshot key above against HugePages_Total, skipping every non-match with continue; the scan is still in progress where this capture continues]
00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32
-- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20444604 kB' 'MemUsed: 12437144 kB' 'SwapCached: 0 kB' 'Active: 6947740 kB' 'Inactive: 3397596 kB' 'Active(anon): 6736332 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10083128 kB' 'Mapped: 92392 kB' 'AnonPages: 265288 kB' 'Shmem: 6474124 kB' 'KernelStack: 5656 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271812 kB' 'Slab: 423912 kB' 'SReclaimable: 271812 kB' 'SUnreclaim: 152100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
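The snapshot just printed is node0's meminfo, and the compare/continue walk that follows is setup/common.sh's get_meminfo resolving HugePages_Surp for node 0: with a node argument given and /sys/devices/system/node/node0/meminfo present, mem_f switches from /proc/meminfo to the per-node file, the "Node 0 " prefix is stripped from every line, and the loop reads var/val pairs until the requested field matches and its value is echoed. A minimal standalone sketch of that lookup, assuming the same file layout (get_meminfo_sketch is a hypothetical name; the real script uses mapfile plus the unrolled compare/continue chain visible in the trace):

    # get_meminfo_sketch FIELD [NODE] -- print one field from (per-node) meminfo
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # a node argument selects the per-node file when it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")  # per-node lines carry a "Node N " prefix
        return 1
    }
    # get_meminfo_sketch HugePages_Surp 0  ->  0, matching the node0 snapshot above
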
00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.627 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.628 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12717856 kB' 'MemUsed: 6691576 kB' 'SwapCached: 0 kB' 'Active: 4594112 kB' 'Inactive: 303880 kB' 'Active(anon): 4340732 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4599496 kB' 'Mapped: 114404 kB' 'AnonPages: 298604 kB' 'Shmem: 4042236 kB' 'KernelStack: 4344 kB' 'PageTables: 3548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139296 kB' 'Slab: 284452 kB' 'SReclaimable: 139296 kB' 'SUnreclaim: 145156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.629 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:18.630 node0=512 expecting 513 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:18.630 node1=513 expecting 512 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:18.630 00:03:18.630 real 0m1.126s 00:03:18.630 user 0m0.525s 00:03:18.630 sys 0m0.631s 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:18.630 10:10:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.630 ************************************ 00:03:18.630 END TEST odd_alloc 00:03:18.630 ************************************ 00:03:18.630 10:10:08 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:18.630 10:10:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:18.630 10:10:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:18.630 10:10:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.630 ************************************ 00:03:18.630 START TEST custom_alloc 00:03:18.630 ************************************ 00:03:18.630 10:10:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:18.630 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:18.630 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:18.630 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:18.630 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:18.630 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:18.630 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.631 
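This closes out odd_alloc: the kernel ended up with 1025 pages total, and per the echoes above node0 reports 512 where 513 was expected while node1 reports 513 where 512 was expected, yet the test still passes. The @127 bookkeeping records both the per-node counts it measured and the ones it tracks as array keys (sorted_s/sorted_t), so the final @130 check compares the two count sets order-insensitively -- which is why "node0=512 expecting 513" can still end in [[ 512 513 == 512 513 ]] succeeding. The custom_alloc test starting here then builds a deliberately uneven layout: get_test_nr_hugepages converts a kB size into a page count against the default 2048 kB hugepage size (1048576 kB -> 512 pages here; 2097152 kB -> 1024 pages on the second call), and get_test_nr_hugepages_per_node spreads a count evenly over the two nodes. A small sketch of both conversions, with hypothetical helper names (pages_for_size_kb, split_evenly):

    # size (kB) -> hugepage count, using the running kernel's hugepage size
    pages_for_size_kb() {
        local size_kb=$1 hp_kb
        hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
        echo $(( size_kb / hp_kb ))                                # 1048576 -> 512
    }
    # even split across nodes, remainder to the lower-numbered ones,
    # consistent with the 513/512 expectation odd_alloc printed above
    split_evenly() {
        local total=$1 nodes=$2 i
        for (( i = 0; i < nodes; i++ )); do
            printf 'node%d=%d\n' "$i" $(( total / nodes + (i < total % nodes) ))
        done
    }
    # split_evenly 512 2  ->  node0=256 / node1=256
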
10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.631 10:10:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.642 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:19.642 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.642 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:19.642 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:19.642 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:19.642 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:19.642 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:19.642 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:19.642 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:19.642 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:19.642 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:19.642 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:19.642 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:19.642 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:19.642 0000:80:04.2 (8086 3c22): 
00:03:19.642 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:19.642 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.642 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32111728 kB' 'MemAvailable: 36077436 kB' 'Buffers: 2704 kB' 'Cached: 14679960 kB' 'SwapCached: 0 kB' 'Active: 11542296 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077508 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563868 kB' 'Mapped: 206780 kB' 'Shmem: 10516400 kB' 'KReclaimable: 411108 kB' 'Slab: 708216 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297108 kB' 'KernelStack: 9936 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12071408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[xtrace condensed: setup/common.sh@31-32 then read the snapshot above back field by field, hitting `continue` on every name from MemTotal through HardwareCorrupted that failed the AnonHugePages match]
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.644 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32112616 kB' 'MemAvailable: 36078324 kB' 'Buffers: 2704 kB' 'Cached: 14679964 kB' 'SwapCached: 0 kB' 'Active: 11541896 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077108 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563892 kB' 'Mapped: 206816 kB' 'Shmem: 10516404 kB' 'KReclaimable: 411108 kB' 'Slab: 708208 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297100 kB' 'KernelStack: 9936 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12071428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190256 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[xtrace condensed: the same field-by-field scan, `continue`-ing past every name that is not HugePages_Surp, including HugePages_Total, HugePages_Free and HugePages_Rsvd]
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.646 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32112908 kB' 'MemAvailable: 36078616 kB' 'Buffers: 2704 kB' 'Cached: 14679980 kB' 'SwapCached: 0 kB' 'Active: 11541904 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077116 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563872 kB' 'Mapped: 206816 kB' 'Shmem: 10516420 kB' 'KReclaimable: 411108 kB' 'Slab: 708276 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297168 kB' 'KernelStack: 9984 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12071448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190256 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[xtrace condensed: the field-by-field scan for HugePages_Rsvd was still in progress, around the ShmemHugePages field, when this log capture was truncated]
setup/common.sh@31 -- # IFS=': ' 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.911 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.912 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32112820 kB' 'MemAvailable: 36078528 kB' 'Buffers: 2704 kB' 'Cached: 14680000 kB' 'SwapCached: 0 kB' 'Active: 11541880 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077092 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563832 kB' 'Mapped: 206816 kB' 'Shmem: 10516440 kB' 'KReclaimable: 411108 kB' 'Slab: 708276 kB' 'SReclaimable: 411108 kB' 'SUnreclaim: 297168 kB' 'KernelStack: 9968 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12071468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190256 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
[setup/common.sh@31-@32: each key, MemTotal through HugePages_Free, compared against HugePages_Total and skipped with continue]
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
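Every setup/common.sh@17-@33 run in this log is the same helper, get_meminfo, resolving one key from /proc/meminfo or from a per-node meminfo file. A minimal sketch of the loop the xtrace implies; this is reconstructed from the trace rather than copied from the SPDK tree, so details such as the node-selection branch and the exact echo are assumptions:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the "Node +([0-9]) " pattern below

  # Reconstruction of setup/common.sh's get_meminfo as implied by the
  # @17-@33 xtrace entries above; the shipped helper may differ in detail.
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # A node-scoped query such as "get_meminfo HugePages_Surp 0" reads the
      # per-node file instead (the @23/@24 entries in the trace).
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "; strip it (@29)

      while IFS=': ' read -r var val _; do
          # xtrace prints the right-hand pattern with every character escaped,
          # which is why the log shows [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
          [[ $var == "$get" ]] || continue
          echo "$val"                    # 1536 for HugePages_Total here, 0 for HugePages_Rsvd
          return 0
      done < <(printf '%s\n' "${mem[@]}")
  }

  get_meminfo HugePages_Total    # -> 1536 on this machine
  get_meminfo HugePages_Surp 0   # -> 0

With the values dumped above, this reproduces the echo/return pairs seen in the trace.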
node in "${!nodes_test[@]}" 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20444236 kB' 'MemUsed: 12437512 kB' 'SwapCached: 0 kB' 'Active: 6947724 kB' 'Inactive: 3397596 kB' 'Active(anon): 6736316 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10083124 kB' 'Mapped: 92412 kB' 'AnonPages: 265296 kB' 'Shmem: 6474120 kB' 'KernelStack: 5672 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271812 kB' 'Slab: 423924 kB' 'SReclaimable: 271812 kB' 'SUnreclaim: 152112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.913 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.914 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- 
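Node 0 checks out: 512 hugepages allocated, 0 surplus. The hugepages.sh@115-@117 entries are the per-node accounting pass, where each node's expected count is topped up by the global reservation (resv=0 here) and by that node's surplus pages. A self-contained sketch of the loop those line numbers suggest, with sample values taken from this run; treating the traced "+= 0" as an inlined get_meminfo call is an assumption:

  #!/usr/bin/env bash
  # Per-node accounting pass implied by setup/hugepages.sh@115-@117.
  resv=0
  nodes_test=([0]=512 [1]=1024)

  get_meminfo() { echo 0; }   # stand-in: HugePages_Surp was 0 on both nodes in this log

  for node in "${!nodes_test[@]}"; do
      ((nodes_test[node] += resv))                                  # @116: add reserved pages
      ((nodes_test[node] += $(get_meminfo HugePages_Surp "$node"))) # @117: add node surplus
  done
  echo "${nodes_test[@]}"   # still "512 1024": nothing reserved, no surplus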
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 11668708 kB' 'MemUsed: 7740724 kB' 'SwapCached: 0 kB' 'Active: 4594228 kB' 'Inactive: 303880 kB' 'Active(anon): 4340848 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4599620 kB' 'Mapped: 114404 kB' 'AnonPages: 298528 kB' 'Shmem: 4042360 kB' 'KernelStack: 4296 kB' 'PageTables: 3452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139296 kB' 'Slab: 284352 kB' 'SReclaimable: 139296 kB' 'SUnreclaim: 145056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
00:03:19.915 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [xtrace repeats for every remaining /proc/meminfo field, Writeback through FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free, until the requested field matches]
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
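For readers skimming the trace: the wall of [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue lines above is just the xtrace of setup/common.sh's get_meminfo scanning /proc/meminfo one "Key: value" pair at a time with the IFS=': ' / read -r var val _ idiom visible in the log. A minimal sketch of that loop follows; the function name get_meminfo_sketch is illustrative, not the exact SPDK source:

    # Hypothetical helper mirroring the xtrace idiom above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching field is skipped
            echo "$val"                        # a trailing "kB" token lands in $_
            return 0
        done < /proc/meminfo
        return 1                               # field not present
    }

    get_meminfo_sketch HugePages_Surp          # prints 0 on this build node

The echo 0 / return 0 pair in the trace is exactly this match firing on HugePages_Surp.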
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:19.916 node0=512 expecting 512
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:19.916 node1=1024 expecting 1024
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:19.916
00:03:19.916 real 0m1.255s
00:03:19.916 user 0m0.564s
00:03:19.916 sys 0m0.725s
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:19.916 10:10:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:19.916 ************************************
00:03:19.916 END TEST custom_alloc
00:03:19.916 ************************************
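The custom_alloc verdict comes from comparing the per-node hugepage counts the kernel reports against what the test configured, which is what the node0=512 / node1=1024 echoes and the final [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] line show. A hedged reconstruction of that check, assuming the observed counts are simply joined into a comma list as the trace suggests (array and variable names here are illustrative):

    # Compare observed per-node hugepage counts to the expected split.
    nodes_test=(512 1024)     # counts read back per NUMA node on this run
    expected=512,1024         # what custom_alloc asked for

    got=
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[$node]} expecting ${nodes_test[$node]}"
        got+=${got:+,}${nodes_test[$node]}
    done
    [[ $got == "$expected" ]] && echo 'custom_alloc: per-node split matches'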
00:03:19.916 10:10:09 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:19.916 10:10:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:19.916 10:10:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:19.916 10:10:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:19.916 ************************************
00:03:19.916 START TEST no_shrink_alloc
00:03:19.916 ************************************
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:19.916 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
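The get_test_nr_hugepages 2097152 0 trace above converts a size into a page count and pins it to the requested NUMA node. A simplified sketch of that arithmetic, assuming the size argument is in kB against the 2048 kB default hugepage size (which is what the nr_hugepages=1024 line implies; the real helper also tracks the machine's node count, the _no_nodes=2 local in the trace):

    # The arithmetic behind "get_test_nr_hugepages 2097152 0" (simplified).
    size=2097152              # kB of hugepage memory requested
    default_hugepages=2048    # kB per hugepage
    node_ids=(0)              # remaining args: NUMA nodes to pin the pages to

    nr_hugepages=$(( size / default_hugepages ))   # 1024
    declare -a nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages
    done
    echo "node0=${nodes_test[0]}"                  # node0=1024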
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.917 10:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:20.855 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:20.855 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:20.855 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:20.855 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:20.855 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:20.855 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:20.856 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:20.856 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:20.856 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:20.856 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:20.856 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:20.856 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:20.856 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:20.856 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:20.856 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:20.856 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:20.856 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33137180 kB' 'MemAvailable: 37102856 kB' 'Buffers: 2704 kB' 'Cached: 14680088 kB' 'SwapCached: 0 kB' 'Active: 11541860 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077072 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563736 kB' 'Mapped: 206908 kB' 'Shmem: 10516528 kB' 'KReclaimable: 411076 kB' 'Slab: 708144 kB' 'SReclaimable: 411076 kB' 'SUnreclaim: 297068 kB' 'KernelStack: 10000 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12071696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190320 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
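The mapfile -t mem / mem=("${mem[@]#Node +([0-9]) }") pair in that trace handles the difference between the machine-wide /proc/meminfo and the per-node files, whose lines carry a "Node <id> " prefix (for example "Node 0 MemTotal: ..."). With an empty $node the per-node path does not exist, so the global file is used, and the prefix strip is a no-op. A standalone sketch, noting that the +([0-9]) pattern needs extglob:

    shopt -s extglob                   # required for the +([0-9]) pattern below

    node=                              # empty => machine-wide /proc/meminfo
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip "Node <id> " so both sources parse alike
    printf '%s\n' "${mem[@]:0:3}"      # first few normalized lines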
00:03:20.856 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue  [xtrace repeats for every /proc/meminfo field ahead of the match, MemTotal through HardwareCorrupted]
00:03:20.857 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.857 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.857 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:20.857 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:20.857 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:20.857 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.857 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33136972 kB' 'MemAvailable: 37102648 kB' 'Buffers: 2704 kB' 'Cached: 14680088 kB' 'SwapCached: 0 kB' 'Active: 11541992 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077204 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563880 kB' 'Mapped: 206904 kB' 'Shmem: 10516528 kB' 'KReclaimable: 411076 kB' 'Slab: 708144 kB' 'SReclaimable: 411076 kB' 'SUnreclaim: 297068 kB' 'KernelStack: 10000 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12071712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
00:03:20.858 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [xtrace repeats for every field ahead of the match, MemTotal through HugePages_Rsvd]
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33137648 kB' 'MemAvailable: 37103324 kB' 'Buffers: 2704 kB' 'Cached: 14680128 kB' 'SwapCached: 0 kB' 'Active: 11541460 kB' 'Inactive: 3701476 kB' 'Active(anon): 11076672 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563264 kB' 'Mapped: 206828 kB' 'Shmem: 10516568 kB' 'KReclaimable: 411076 kB' 'Slab: 708132 kB' 'SReclaimable: 411076 kB' 'SUnreclaim: 297056 kB' 'KernelStack: 9952 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12071736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190272 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB'
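At this point verify_nr_hugepages has anon=0 and surp=0 and is about to read HugePages_Rsvd the same way. A hedged sketch of how those readings typically combine in a verification like this, reusing the get_meminfo_sketch helper sketched after the custom_alloc scan above; the exact pass/fail condition lives in parts of setup/hugepages.sh not shown in this slice of the log, so treat the check below as illustrative:

    # Illustrative final accounting, using the values visible in the snapshots.
    anon=0    # AnonHugePages (kB): THP is not inflating the numbers
    surp=0    # HugePages_Surp: nothing allocated beyond nr_hugepages
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0, per the snapshot
    total=$(get_meminfo_sketch HugePages_Total)  # 1024
    free=$(get_meminfo_sketch HugePages_Free)    # 1024

    (( total == 1024 && free == total && surp == 0 && resv == 0 )) \
        && echo 'hugepage pool intact before the shrink attempt'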
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 
10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
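The get_meminfo call traced above (setup/common.sh@17-33) resolves a single /proc/meminfo key: it splits each line on IFS=': ', compares the first field against the requested name, and echoes the matching value, falling back to /sys/devices/system/node/node$node/meminfo when a node id is supplied. A minimal sketch of that parsing pattern, under the hypothetical name get_meminfo_sketch (the real helper mapfiles the whole file and strips the "Node N " prefix with an extglob, as the trace shows):

  get_meminfo_sketch() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo line var val _
      # per-node counters live in a separate sysfs file
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while read -r line; do
          line=${line#Node "$node" }      # node files prefix each line with "Node N "
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$get" ]]; then
              echo "$val"                 # e.g. "0" for HugePages_Rsvd above
              return 0
          fi
      done <"$mem_f"
      return 1
  }

Here get_meminfo_sketch HugePages_Rsvd would print 0, the same value the trace returns via "echo 0" just before setup/hugepages.sh@100 stores it in resv.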
00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.124 nr_hugepages=1024 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.124 resv_hugepages=0 00:03:21.124 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.125 surplus_hugepages=0 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.125 anon_hugepages=0 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33137648 kB' 'MemAvailable: 37103324 kB' 'Buffers: 2704 kB' 'Cached: 14680128 kB' 'SwapCached: 0 kB' 'Active: 11541824 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077036 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563644 kB' 'Mapped: 206828 kB' 'Shmem: 10516568 kB' 'KReclaimable: 411076 kB' 'Slab: 708132 kB' 'SReclaimable: 411076 kB' 'SUnreclaim: 297056 kB' 'KernelStack: 9968 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12071756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190272 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB' 00:03:21.125 
10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19391824 kB' 'MemUsed: 13489924 kB' 'SwapCached: 0 kB' 'Active: 6947392 kB' 'Inactive: 3397596 kB' 'Active(anon): 6735984 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10083160 kB' 'Mapped: 92424 kB' 'AnonPages: 264916 kB' 'Shmem: 6474156 kB' 'KernelStack: 5672 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271780 kB' 'Slab: 423784 kB' 'SReclaimable: 271780 kB' 'SUnreclaim: 152004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
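get_nodes (setup/hugepages.sh@27-33, traced just above) enumerates NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) and indexes its arrays by the trailing node id, which is why the trace ends that call with no_nodes=2 on this two-socket box. A minimal sketch of the idiom; the sysfs nr_hugepages path is an assumption, since the trace only shows the resulting assignments (1024 for node0, 0 for node1):

  shopt -s extglob                      # required for the +([0-9]) glob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} strips everything through the last "node", leaving the id
      nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages" 2>/dev/null || echo 0)
  done
  echo "no_nodes=${#nodes_sys[@]}"      # 2 on this machine, per the trace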
00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue (trace condensed: the same IFS=': ' / read -r var val _ / continue cycle repeats for every remaining /proc/meminfo key from Mlocked through HugePages_Total; none matches HugePages_Surp) 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.128 10:10:10
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:21.128 node0=1024 expecting 1024 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.128 10:10:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.070 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:22.070 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.070 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:22.070 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:22.070 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:22.070 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:22.070 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:22.070 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:22.070 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:22.070 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:22.070 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:22.070 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:22.070 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:22.070 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:22.070 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:22.070 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:22.070 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:22.070 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.070 10:10:11 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33126168 kB' 'MemAvailable: 37091844 kB' 'Buffers: 2704 kB' 'Cached: 14680188 kB' 'SwapCached: 0 kB' 'Active: 11542280 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077492 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564060 kB' 'Mapped: 206840 kB' 'Shmem: 10516628 kB' 'KReclaimable: 411076 kB' 'Slab: 708328 kB' 'SReclaimable: 411076 kB' 'SUnreclaim: 297252 kB' 'KernelStack: 9968 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12072128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190352 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB' 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
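Note for readers following the trace: the long runs of "# continue" entries above and below all come from one small lookup loop in setup/common.sh. Below is a minimal sketch of that lookup, reconstructed from the traced commands (the mapfile, the "Node +([0-9]) " prefix strip, and the IFS=': ' read loop); the function name meminfo_value and its standalone packaging are assumptions for illustration, not the script's actual interface.

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

    meminfo_value() {
        local get=$1 node=${2:-}    # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        local -a mem
        local var val _
        # Per-node lookups read that node's own meminfo file when it exists;
        # with $node empty, the trace's "[[ -e .../node/node/meminfo ]]" test
        # fails and the global /proc/meminfo is used instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the per-node "Node <n> " prefix
        # Scan key by key: split on ': ', skip with 'continue' until the
        # requested key matches, then print its value -- this is the loop the
        # xtrace output is single-stepping through.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    meminfo_value HugePages_Surp    # prints 0 on this box, matching "# echo 0" at @33

On this machine a lookup of HugePages_Surp, AnonHugePages, or HugePages_Rsvd each lands on a 0 value, which is why every one of these scans ends in the "# echo 0" / "# return 0" pair at setup/common.sh@33.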
00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.070 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue (trace condensed: the same IFS=': ' / read / continue cycle skips every key from MemAvailable through Committed_AS; none matches AnonHugePages) 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.071 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33126600 kB' 'MemAvailable: 37092276 kB' 'Buffers: 2704 kB' 'Cached: 14680192 kB' 'SwapCached: 0 kB' 'Active: 11542452 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077664 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564172 kB' 'Mapped: 206836 kB' 'Shmem: 10516632 kB' 'KReclaimable: 411076 kB' 'Slab: 708328 kB' 'SReclaimable: 411076 kB' 'SUnreclaim: 297252 kB' 'KernelStack: 9984 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12072144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190304 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.072 10:10:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.072 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ (trace condensed: the scan skips every key from SwapCached through CmaFree with 'continue'; none matches HugePages_Surp) 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.073 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.074 10:10:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33126440 kB' 'MemAvailable: 37092116 kB' 'Buffers: 2704 kB' 'Cached: 14680212 kB' 'SwapCached: 0 kB' 'Active: 11542028 kB' 'Inactive: 3701476 kB' 'Active(anon): 11077240 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563748 kB' 'Mapped: 206836 kB' 'Shmem: 10516652 kB' 'KReclaimable: 411076 kB' 'Slab: 708336 kB' 'SReclaimable: 411076 kB' 'SUnreclaim: 297260 kB' 'KernelStack: 10000 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12072168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190304 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB' 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.074 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue (trace condensed: the scan skips every key from Active through SecPageTables with 'continue'; none matches HugePages_Rsvd) 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.075 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.076 10:10:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.076 nr_hugepages=1024 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.076 resv_hugepages=0 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.076 surplus_hugepages=0 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.076 anon_hugepages=0 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33126736 kB' 'MemAvailable: 37092412 kB' 'Buffers: 2704 kB' 'Cached: 14680212 kB' 'SwapCached: 0 kB' 'Active: 11541732 kB' 'Inactive: 3701476 kB' 'Active(anon): 11076944 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563472 kB' 'Mapped: 206836 kB' 'Shmem: 10516652 kB' 'KReclaimable: 411076 kB' 'Slab: 708336 kB' 'SReclaimable: 411076 kB' 'SUnreclaim: 297260 kB' 'KernelStack: 10000 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12072188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190304 kB' 'VmallocChunk: 0 kB' 'Percpu: 26496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3655972 kB' 'DirectMap2M: 31918080 kB' 'DirectMap1G: 25165824 kB' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.076 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
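
The trace above is get_meminfo from test/setup/common.sh: mapfile slurps the chosen meminfo file, then an IFS=': ' read loop walks it one 'Key: value' pair at a time until the requested key matches. A minimal bash sketch of the same scan (simplified for illustration; get_meminfo_sketch and the sed-based prefix stripping are this sketch's own, not the exact SPDK helper):

#!/usr/bin/env bash
# Minimal sketch of the field scan traced above: split each meminfo
# line on ': ' and print the value once the requested key matches.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live in sysfs when a node index is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is skipped, which is the long run of
        # 'continue' branches visible in the xtrace.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f") # per-node lines carry a "Node N " prefix
    return 1
}

Called as get_meminfo_sketch HugePages_Rsvd (global) or get_meminfo_sketch HugePages_Surp 0 (node 0), mirroring the two shapes of call this test makes.
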
[xtrace collapsed: setup/common.sh@32 compares every /proc/meminfo key, MemTotal through Unaccepted, against HugePages_Total; each non-matching key takes the 'continue' branch]
00:03:22.078 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.078 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:22.078 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.078 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.078 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.078 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.338 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.339 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19387604 kB' 'MemUsed: 13494144 kB' 'SwapCached: 0 kB' 'Active: 6947392 kB' 'Inactive: 3397596 kB' 'Active(anon): 6735984 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10083196 kB' 'Mapped: 92432 kB' 'AnonPages: 264920 kB' 'Shmem: 6474192 kB' 'KernelStack: 5656 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271780 kB' 'Slab: 423956 kB' 'SReclaimable: 271780 kB' 'SUnreclaim: 152176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
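
The printf above is the node0 view from /sys/devices/system/node/node0/meminfo; the test has already read the global pool and asserted (( 1024 == nr_hugepages + surp + resv )). A hedged, self-contained sketch of that accounting (verify_hugepages and its expected parameter are illustrative names, and awk stands in for the script's read loop; the real assertions live in test/setup/hugepages.sh):

#!/usr/bin/env bash
shopt -s extglob # the node+([0-9]) glob below needs extended globbing

# Sketch of the hugepage-pool accounting this test asserts.
verify_hugepages() {
    local expected=$1 # e.g. 1024, the value written to nr_hugepages
    local total surp resv node idx pages
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    # Global invariant: the kernel-reported pool must equal the
    # requested count plus surplus plus reserved pages.
    (( total == expected + surp + resv )) || return 1
    # Per-node view: the per-node files prefix each key with "Node N",
    # so HugePages_Total sits in field 3 and its value in field 4.
    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}
        pages=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
        echo "node$idx=$pages"
    done
}

On this machine that would print node0=1024 and node1=0, consistent with the 'node0=1024 expecting 1024' check a few lines below.
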
[xtrace collapsed: setup/common.sh@32 compares every node0 meminfo key, MemTotal through HugePages_Free, against HugePages_Surp; each non-matching key takes the 'continue' branch]
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:22.340 node0=1024 expecting 1024
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:22.340 real 0m2.302s
00:03:22.340 user 0m1.050s
00:03:22.340 sys 0m1.315s
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:22.340 10:10:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:22.340 ************************************
00:03:22.340 END TEST no_shrink_alloc
00:03:22.340 ************************************
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:22.340 10:10:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:22.340 real 0m9.603s
00:03:22.340 user 0m3.973s
00:03:22.340 sys 0m5.002s
00:03:22.340 10:10:11 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:22.340 10:10:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:22.340 ************************************
00:03:22.340 END TEST hugepages
00:03:22.340 ************************************
00:03:22.340 10:10:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:22.340 10:10:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:22.340 10:10:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:22.340 10:10:11 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:22.340 ************************************
00:03:22.340 START TEST driver
00:03:22.340 ************************************
00:03:22.340 10:10:11 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:22.340 * Looking for test storage...
00:03:22.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:22.340 10:10:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:22.340 10:10:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.340 10:10:12 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.880 10:10:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:24.880 10:10:14 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:24.880 10:10:14 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:24.880 10:10:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:24.880 ************************************ 00:03:24.880 START TEST guess_driver 00:03:24.880 ************************************ 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 102 > 0 )) 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:24.880 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:24.880 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:24.880 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:24.880 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:24.880 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:24.880 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:24.880 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:24.880 10:10:14 setup.sh.driver.guess_driver
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' Looking for driver=vfio-pci 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.880 10:10:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:25.821 10:10:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:25.821 10:10:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:25.821 10:10:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same @58 marker / @61 driver / @57 read cycle repeats identically for each remaining device line printed by setup.sh config, every pass confirming vfio-pci ...]
00:03:26.761 10:10:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.761 10:10:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.761 10:10:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.761 10:10:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:26.761 10:10:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:26.761 10:10:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.761 10:10:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.299 00:03:29.299 real 0m4.435s 00:03:29.299 user 0m1.031s 00:03:29.299 sys 0m1.668s 10:10:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:29.299 10:10:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:29.299 ************************************ 00:03:29.299 END TEST guess_driver 00:03:29.299 ************************************ 00:03:29.299 00:03:29.299 real 0m6.770s 00:03:29.299 user 0m1.572s 00:03:29.299 sys 0m2.572s 10:10:18 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:29.299
10:10:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:29.299 ************************************ 00:03:29.299 END TEST driver 00:03:29.299 ************************************ 00:03:29.299 10:10:18 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:29.299 10:10:18 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:29.299 10:10:18 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:29.299 10:10:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:29.299 ************************************ 00:03:29.299 START TEST devices 00:03:29.299 ************************************ 00:03:29.299 10:10:18 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:29.299 * Looking for test storage... 00:03:29.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:29.299 10:10:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:29.299 10:10:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:29.299 10:10:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.299 10:10:18 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:30.680 10:10:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:30.680 10:10:20 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:30.680 No valid GPT data, 
bailing 00:03:30.680 10:10:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:30.680 10:10:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:30.680 10:10:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:30.680 10:10:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:30.680 10:10:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:30.680 10:10:20 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:30.680 10:10:20 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.680 10:10:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:30.680 ************************************ 00:03:30.680 START TEST nvme_mount 00:03:30.680 ************************************ 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:30.680 10:10:20 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:30.680 10:10:20 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:31.622 Creating new GPT entries in memory. 00:03:31.622 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:31.622 other utilities. 00:03:31.622 10:10:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:31.622 10:10:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:31.622 10:10:21 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:31.622 10:10:21 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:31.622 10:10:21 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:32.560 Creating new GPT entries in memory. 00:03:32.560 The operation has completed successfully. 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1398487 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
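For readers following the xtrace: the nvme_mount flow above reduces to a short shell sequence -- zap the GPT, create one 1 GiB partition, format it, mount it, and drop a marker file for the later verify step. A minimal standalone sketch, assuming a disposable scratch disk (/dev/nvme0n1 and /tmp/nvme_mount are illustrative stand-ins for the test's own device and mount point, and every step is destructive):

    # Sketch of the partition/format/mount flow traced above -- scratch disk only.
    set -euo pipefail
    disk=/dev/nvme0n1                        # illustrative; pick a disposable device
    mnt=/tmp/nvme_mount                      # stand-in for .../spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                 # wipe existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199      # (2099199 - 2048 + 1) * 512 B = 1 GiB
    mkfs.ext4 -qF "${disk}p1"                # quiet, forced ext4, as in the log
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"                   # marker file the verify step checks for

The 1073741824-byte size in the trace is that same 1 GiB divided by the 512-byte sector size at common.sh@51, which is where the 2048..2099199 sector range comes from.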
00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.560 10:10:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:33.501 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:33.501 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:33.501 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:33.501 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62 check is then repeated, and fails, for each of the sixteen other devices 0000:00:04.0-7 and 0000:80:04.0-7, with a @60 read between passes ...]
00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:33.763 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.763 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:34.022 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:34.022 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:34.022 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:34.022 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:34.023 10:10:23
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.023 10:10:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.959 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:34.959 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:34.959 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:34.959 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... sixteen more @62 checks and @60 reads follow for 0000:00:04.0-7 and 0000:80:04.0-7, none matching the allow-listed device ...]
00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.960 10:10:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:35.898 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:35.898 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:35.898 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:35.898 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... sixteen more @62 checks and @60 reads follow for 0000:00:04.0-7 and 0000:80:04.0-7, none matching the allow-listed device ...]
00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:36.159 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:36.159 00:03:36.159 real 0m5.530s 00:03:36.159 user 0m1.216s 00:03:36.159 sys 0m2.042s 00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:36.159 10:10:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:36.159 ************************************ 00:03:36.159 END TEST nvme_mount 00:03:36.159 ************************************
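Every verify pass in this test follows one pattern: re-render scripts/setup.sh config with PCI_ALLOWED narrowed to the test disk, then scan each reported device for the expected "Active devices" annotation. A hedged sketch of that scan, with the field layout simplified to match the read -r pci _ _ status seen above (print_status is a hypothetical stand-in for the setup output config helper, not the test's real function):

    # Sketch of the verify scan; print_status is a hypothetical stand-in.
    dev=0000:84:00.0                     # only device PCI_ALLOWED lets setup.sh touch
    mounts=data@nvme0n1                  # expected "Active devices" annotation
    found=0
    while read -r pci _ _ status; do
        if [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]]; then
            found=1                      # disk reported busy, so setup.sh skipped it
        fi
    done < <(PCI_ALLOWED=$dev print_status)
    (( found == 1 ))                     # fail unless the expected usage was seen

The check is deliberately inverted: the test passes when setup.sh refuses to rebind the device, proving the mount (or the partition data) kept it busy.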
00:03:36.159 10:10:25 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:36.159 10:10:25 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:36.159 10:10:25 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:36.159 10:10:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:36.159 ************************************ 00:03:36.159 START TEST dm_mount 00:03:36.159 ************************************ 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:36.159 10:10:25 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:37.098 Creating new GPT entries in memory. 00:03:37.098 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:37.098 other utilities. 00:03:37.098 10:10:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:37.098 10:10:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:37.098 10:10:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:37.098 10:10:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:37.098 10:10:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:38.037 Creating new GPT entries in memory. 00:03:38.037 The operation has completed successfully. 
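The run then adds a second, identical 1 GiB partition and layers a device-mapper node over the pair. The log records the dmsetup create nvme_dm_test call (just below) but not the table it was given; a linear concatenation like the following is one plausible reconstruction, with the table values being assumptions rather than anything taken from the log:

    # Hypothetical reconstruction -- the dm table itself does not appear in the log.
    sgdisk /dev/nvme0n1 --new=1:2048:2099199      # p1: 2097152 sectors
    sgdisk /dev/nvme0n1 --new=2:2099200:4196351   # p2: the next 2097152 sectors
    dmsetup create nvme_dm_test <<'EOF'
    0       2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    EOF
    readlink -f /dev/mapper/nvme_dm_test          # the log resolves this to /dev/dm-0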
00:03:38.037 10:10:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:38.037 10:10:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.037 10:10:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:38.037 10:10:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:38.037 10:10:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:39.419 The operation has completed successfully. 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1400264 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.419 10:10:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.358 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:40.358 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:40.358 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:40.358 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... sixteen more @62 checks and @60 reads follow for 0000:00:04.0-7 and 0000:80:04.0-7, none matching the allow-listed device ...]
00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:40.359 10:10:29
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.359 10:10:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:41.298 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:41.298 00:03:41.298 real 0m5.233s 00:03:41.298 user 0m0.800s 00:03:41.298 sys 0m1.376s 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.298 10:10:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:41.298 ************************************ 00:03:41.298 END TEST dm_mount 00:03:41.298 ************************************ 00:03:41.298 10:10:31 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:41.298 10:10:31 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:41.298 10:10:31 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.298 10:10:31 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.298 10:10:31 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:41.298 10:10:31 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.298 10:10:31 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:41.558 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:41.558 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:41.558 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:41.558 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:41.558 10:10:31 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:41.558 10:10:31 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.558 10:10:31 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:41.558 10:10:31 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.558 10:10:31 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:41.558 10:10:31 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.558 10:10:31 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:41.558 00:03:41.558 real 0m12.519s 00:03:41.558 user 0m2.649s 00:03:41.558 sys 0m4.359s 00:03:41.558 10:10:31 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.558 10:10:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:41.558 ************************************ 00:03:41.558 END TEST devices 00:03:41.558 ************************************ 00:03:41.558 00:03:41.558 real 0m38.452s 00:03:41.558 user 0m11.283s 00:03:41.558 sys 0m16.795s 00:03:41.558 10:10:31 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.558 10:10:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.558 ************************************ 00:03:41.558 END TEST setup.sh 00:03:41.558 ************************************ 00:03:41.817 10:10:31 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:42.756 Hugepages 00:03:42.756 node hugesize free / total 00:03:42.756 node0 1048576kB 0 / 0 00:03:42.756 node0 2048kB 2048 / 2048 00:03:42.756 node1 1048576kB 0 / 0 00:03:42.756 node1 2048kB 0 / 0 00:03:42.756 00:03:42.756 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:42.756 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:03:42.756 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:03:42.756 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:03:42.756 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:03:42.756 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:03:42.756 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:03:42.756 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - - 00:03:42.756 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:03:42.756 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:03:42.756 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:03:42.756 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:03:42.756 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:03:42.756 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:03:42.756 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:03:42.756 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:03:42.756 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:03:42.756 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:42.756 10:10:32 -- spdk/autotest.sh@130 -- # uname -s 00:03:42.756 10:10:32 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:42.756 10:10:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:42.756 10:10:32 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.693 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:43.693 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:43.693 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:43.693 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:43.952 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:43.952 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:43.952 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:43.952 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:43.952 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:43.952 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:43.952 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:43.952 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:43.952 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:43.952 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:43.952 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:43.952 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:44.892 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:44.892 10:10:34 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:45.830 10:10:35 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:45.830 10:10:35 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:45.830 10:10:35 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:45.830 10:10:35 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:45.830 10:10:35 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:45.830 10:10:35 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:45.830 10:10:35 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:45.830 10:10:35 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:45.830 10:10:35 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:45.830 10:10:35 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:45.830 10:10:35 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:03:45.830 10:10:35 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.828 Waiting for block devices as requested 00:03:46.828 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:03:47.088 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:03:47.088 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:03:47.088 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:03:47.348 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:03:47.348 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:03:47.348 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:03:47.348 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:03:47.607 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:03:47.607 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:03:47.607 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:03:47.607 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:03:47.866 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:03:47.866 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:03:47.866 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:03:48.126 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:03:48.126 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:03:48.126 10:10:37 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
00:03:48.126 10:10:37 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:03:48.126 10:10:37 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:48.126 10:10:37 -- common/autotest_common.sh@1502 -- # grep 0000:84:00.0/nvme/nvme 00:03:48.126 10:10:37 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:48.126 10:10:37 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:03:48.126 10:10:37 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:48.126 10:10:37 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:48.126 10:10:37 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:48.126 10:10:37 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:48.126 10:10:37 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:48.126 10:10:37 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:48.126 10:10:37 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:48.126 10:10:37 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:48.126 10:10:37 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:48.126 10:10:37 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:48.126 10:10:37 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:48.126 10:10:37 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:48.126 10:10:37 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:48.126 10:10:37 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:48.126 10:10:37 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:48.126 10:10:37 -- common/autotest_common.sh@1557 -- # continue 00:03:48.126 10:10:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:48.126 10:10:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:48.126 10:10:37 -- common/autotest_common.sh@10 -- # set +x 00:03:48.126 10:10:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:48.126 10:10:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:48.126 10:10:37 -- common/autotest_common.sh@10 -- # set +x 00:03:48.126 10:10:37 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.506 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:49.506 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:49.506 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:49.506 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:49.506 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:49.506 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:49.506 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:49.506 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:49.506 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:49.506 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:49.506 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:49.506 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:49.506 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:49.506 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:49.506 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:49.506 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:50.072 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.329 10:10:39 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:50.329 10:10:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:50.329 10:10:39 -- 
common/autotest_common.sh@10 -- # set +x 00:03:50.329 10:10:39 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:50.329 10:10:39 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:50.329 10:10:39 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:50.329 10:10:39 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:50.329 10:10:39 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:50.329 10:10:39 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:50.329 10:10:39 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:50.329 10:10:39 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:50.329 10:10:39 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:50.329 10:10:39 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:50.329 10:10:39 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:50.329 10:10:39 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:50.329 10:10:39 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:03:50.329 10:10:39 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:50.329 10:10:39 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:03:50.329 10:10:39 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:50.329 10:10:39 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:50.329 10:10:39 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:50.329 10:10:39 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:84:00.0 00:03:50.329 10:10:39 -- common/autotest_common.sh@1592 -- # [[ -z 0000:84:00.0 ]] 00:03:50.329 10:10:39 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1404276 00:03:50.329 10:10:39 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:50.329 10:10:39 -- common/autotest_common.sh@1598 -- # waitforlisten 1404276 00:03:50.329 10:10:39 -- common/autotest_common.sh@831 -- # '[' -z 1404276 ']' 00:03:50.329 10:10:39 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.329 10:10:39 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:50.329 10:10:39 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.329 10:10:39 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:50.329 10:10:39 -- common/autotest_common.sh@10 -- # set +x 00:03:50.329 [2024-07-25 10:10:40.049805] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
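The opal_revert_cleanup step above launches a bare spdk_tgt, records its pid, and blocks in waitforlisten until the target's RPC socket answers. A minimal sketch of that wait, assuming the default /var/tmp/spdk.sock socket and the in-tree rpc.py helper (the retry count and sleep interval are illustrative, not the harness's exact values):

    # Start the target in the background and remember its pid.
    ./build/bin/spdk_tgt &
    tgt_pid=$!

    # Poll until the UNIX-domain RPC socket answers; rpc.py exits
    # non-zero while the target is still starting up.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Only once this loop breaks does the suite issue bdev_nvme_attach_controller and bdev_nvme_opal_revert, which is why the EAL startup banner below precedes any RPC traffic.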
00:03:50.329 [2024-07-25 10:10:40.049906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404276 ] 00:03:50.329 EAL: No free 2048 kB hugepages reported on node 1 00:03:50.587 [2024-07-25 10:10:40.110752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.587 [2024-07-25 10:10:40.230657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.844 10:10:40 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:50.844 10:10:40 -- common/autotest_common.sh@864 -- # return 0 00:03:50.844 10:10:40 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:50.844 10:10:40 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:50.844 10:10:40 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:03:54.126 nvme0n1 00:03:54.126 10:10:43 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:54.126 [2024-07-25 10:10:43.852145] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:54.126 [2024-07-25 10:10:43.852192] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:54.126 request: 00:03:54.126 { 00:03:54.126 "nvme_ctrlr_name": "nvme0", 00:03:54.126 "password": "test", 00:03:54.126 "method": "bdev_nvme_opal_revert", 00:03:54.126 "req_id": 1 00:03:54.126 } 00:03:54.126 Got JSON-RPC error response 00:03:54.126 response: 00:03:54.126 { 00:03:54.126 "code": -32603, 00:03:54.127 "message": "Internal error" 00:03:54.127 } 00:03:54.127 10:10:43 -- common/autotest_common.sh@1604 -- # true 00:03:54.127 10:10:43 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:54.127 10:10:43 -- common/autotest_common.sh@1608 -- # killprocess 1404276 00:03:54.127 10:10:43 -- common/autotest_common.sh@950 -- # '[' -z 1404276 ']' 00:03:54.127 10:10:43 -- common/autotest_common.sh@954 -- # kill -0 1404276 00:03:54.127 10:10:43 -- common/autotest_common.sh@955 -- # uname 00:03:54.127 10:10:43 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:54.127 10:10:43 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1404276 00:03:54.127 10:10:43 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:54.127 10:10:43 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:54.127 10:10:43 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1404276' 00:03:54.127 killing process with pid 1404276 00:03:54.127 10:10:43 -- common/autotest_common.sh@969 -- # kill 1404276 00:03:54.127 10:10:43 -- common/autotest_common.sh@974 -- # wait 1404276 00:03:56.027 10:10:45 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:56.027 10:10:45 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:56.027 10:10:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:56.027 10:10:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:56.027 10:10:45 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:56.027 10:10:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.027 10:10:45 -- common/autotest_common.sh@10 -- # set +x 00:03:56.027 10:10:45 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:56.027 10:10:45 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.027 10:10:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.027 10:10:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.027 10:10:45 -- common/autotest_common.sh@10 -- # set +x 00:03:56.027 ************************************ 00:03:56.027 START TEST env 00:03:56.027 ************************************ 00:03:56.027 10:10:45 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.027 * Looking for test storage... 00:03:56.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:56.027 10:10:45 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.027 10:10:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.027 10:10:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.027 10:10:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.027 ************************************ 00:03:56.028 START TEST env_memory 00:03:56.028 ************************************ 00:03:56.028 10:10:45 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.028 00:03:56.028 00:03:56.028 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.028 http://cunit.sourceforge.net/ 00:03:56.028 00:03:56.028 00:03:56.028 Suite: memory 00:03:56.028 Test: alloc and free memory map ...[2024-07-25 10:10:45.732194] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:56.028 passed 00:03:56.028 Test: mem map translation ...[2024-07-25 10:10:45.763554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:56.028 [2024-07-25 10:10:45.763582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:56.028 [2024-07-25 10:10:45.763636] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:56.028 [2024-07-25 10:10:45.763651] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:56.287 passed 00:03:56.287 Test: mem map registration ...[2024-07-25 10:10:45.824280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:56.287 [2024-07-25 10:10:45.824305] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:56.287 passed 00:03:56.287 Test: mem map adjacent registrations ...passed 00:03:56.287 00:03:56.287 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.287 suites 1 1 n/a 0 0 00:03:56.287 tests 4 4 4 0 0 00:03:56.287 asserts 152 152 152 0 n/a 00:03:56.287 00:03:56.287 Elapsed time = 0.210 seconds 00:03:56.287 00:03:56.287 real 0m0.219s 00:03:56.287 user 0m0.211s 00:03:56.287 sys 0m0.007s 00:03:56.287 10:10:45 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.287 10:10:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:56.287 ************************************ 00:03:56.287 END TEST env_memory 00:03:56.287 ************************************ 00:03:56.287 10:10:45 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.287 10:10:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.287 10:10:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.287 10:10:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.287 ************************************ 00:03:56.287 START TEST env_vtophys 00:03:56.287 ************************************ 00:03:56.287 10:10:45 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.287 EAL: lib.eal log level changed from notice to debug 00:03:56.287 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.287 EAL: Detected lcore 1 as core 1 on socket 0 00:03:56.287 EAL: Detected lcore 2 as core 2 on socket 0 00:03:56.287 EAL: Detected lcore 3 as core 3 on socket 0 00:03:56.287 EAL: Detected lcore 4 as core 4 on socket 0 00:03:56.287 EAL: Detected lcore 5 as core 5 on socket 0 00:03:56.287 EAL: Detected lcore 6 as core 6 on socket 0 00:03:56.287 EAL: Detected lcore 7 as core 7 on socket 0 00:03:56.287 EAL: Detected lcore 8 as core 0 on socket 1 00:03:56.287 EAL: Detected lcore 9 as core 1 on socket 1 00:03:56.287 EAL: Detected lcore 10 as core 2 on socket 1 00:03:56.287 EAL: Detected lcore 11 as core 3 on socket 1 00:03:56.287 EAL: Detected lcore 12 as core 4 on socket 1 00:03:56.287 EAL: Detected lcore 13 as core 5 on socket 1 00:03:56.287 EAL: Detected lcore 14 as core 6 on socket 1 00:03:56.287 EAL: Detected lcore 15 as core 7 on socket 1 00:03:56.287 EAL: Detected lcore 16 as core 0 on socket 0 00:03:56.287 EAL: Detected lcore 17 as core 1 on socket 0 00:03:56.287 EAL: Detected lcore 18 as core 2 on socket 0 00:03:56.287 EAL: Detected lcore 19 as core 3 on socket 0 00:03:56.287 EAL: Detected lcore 20 as core 4 on socket 0 00:03:56.287 EAL: Detected lcore 21 as core 5 on socket 0 00:03:56.287 EAL: Detected lcore 22 as core 6 on socket 0 00:03:56.287 EAL: Detected lcore 23 as core 7 on socket 0 00:03:56.287 EAL: Detected lcore 24 as core 0 on socket 1 00:03:56.287 EAL: Detected lcore 25 as core 1 on socket 1 00:03:56.287 EAL: Detected lcore 26 as core 2 on socket 1 00:03:56.287 EAL: Detected lcore 27 as core 3 on socket 1 00:03:56.287 EAL: Detected lcore 28 as core 4 on socket 1 00:03:56.287 EAL: Detected lcore 29 as core 5 on socket 1 00:03:56.287 EAL: Detected lcore 30 as core 6 on socket 1 00:03:56.287 EAL: Detected lcore 31 as core 7 on socket 1 00:03:56.287 EAL: Maximum logical cores by configuration: 128 00:03:56.287 EAL: Detected CPU lcores: 32 00:03:56.287 EAL: Detected NUMA nodes: 2 00:03:56.287 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:56.287 EAL: Detected shared linkage of DPDK 00:03:56.287 EAL: No shared files mode enabled, IPC will be disabled 00:03:56.287 EAL: Bus pci wants IOVA as 'DC' 00:03:56.287 EAL: Buses did not request a specific IOVA mode. 00:03:56.287 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:56.287 EAL: Selected IOVA mode 'VA' 00:03:56.287 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.287 EAL: Probing VFIO support... 
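EAL has selected IOVA mode 'VA' and is about to probe VFIO, which the next entries confirm (IOMMU type 1 supported, sPAPR and No-IOMMU not). A rough host-side preflight for the same capability, assuming standard sysfs paths (this mirrors what EAL checks, it is not EAL's code):

    # IOMMU groups are populated only when the kernel booted with a
    # working IOMMU; without them, IOVA-as-VA is not usable.
    if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU enabled: VFIO can run with IOVA as VA"
    else
        echo "no IOMMU groups: expect no-IOMMU mode or a uio fallback"
    fi

    # vfio-pci must be loadable for 'VFIO support initialized' to appear.
    modprobe vfio-pci 2>/dev/null || echo "vfio-pci not loadable"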
00:03:56.287 EAL: IOMMU type 1 (Type 1) is supported 00:03:56.287 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:56.287 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:56.287 EAL: VFIO support initialized 00:03:56.287 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.287 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.287 EAL: Setting up physically contiguous memory... 00:03:56.287 EAL: Setting maximum number of open files to 524288 00:03:56.287 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.287 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:56.287 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.287 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.287 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.287 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.287 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.287 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.287 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.287 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.287 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.287 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.287 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.287 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.287 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.287 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.287 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.287 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.287 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.288 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.288 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.288 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.288 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.288 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.288 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:56.288 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.288 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:56.288 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.288 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.288 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:56.288 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:56.288 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.288 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:56.288 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.288 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.288 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:56.288 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:56.288 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.288 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:56.288 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.288 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.288 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:03:56.288 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:56.288 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.288 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:56.288 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.288 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.288 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:56.288 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:56.288 EAL: Hugepages will be freed exactly as allocated. 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: TSC frequency is ~2700000 KHz 00:03:56.288 EAL: Main lcore 0 is ready (tid=7f673a934a00;cpuset=[0]) 00:03:56.288 EAL: Trying to obtain current memory policy. 00:03:56.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.288 EAL: Restoring previous memory policy: 0 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.288 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.288 00:03:56.288 00:03:56.288 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.288 http://cunit.sourceforge.net/ 00:03:56.288 00:03:56.288 00:03:56.288 Suite: components_suite 00:03:56.288 Test: vtophys_malloc_test ...passed 00:03:56.288 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:56.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.288 EAL: Restoring previous memory policy: 4 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was expanded by 4MB 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was shrunk by 4MB 00:03:56.288 EAL: Trying to obtain current memory policy. 00:03:56.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.288 EAL: Restoring previous memory policy: 4 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was expanded by 6MB 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was shrunk by 6MB 00:03:56.288 EAL: Trying to obtain current memory policy. 00:03:56.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.288 EAL: Restoring previous memory policy: 4 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was expanded by 10MB 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was shrunk by 10MB 00:03:56.288 EAL: Trying to obtain current memory policy. 
00:03:56.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.288 EAL: Restoring previous memory policy: 4 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was expanded by 18MB 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was shrunk by 18MB 00:03:56.288 EAL: Trying to obtain current memory policy. 00:03:56.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.288 EAL: Restoring previous memory policy: 4 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was expanded by 34MB 00:03:56.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.288 EAL: request: mp_malloc_sync 00:03:56.288 EAL: No shared files mode enabled, IPC is disabled 00:03:56.288 EAL: Heap on socket 0 was shrunk by 34MB 00:03:56.288 EAL: Trying to obtain current memory policy. 00:03:56.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.547 EAL: Restoring previous memory policy: 4 00:03:56.547 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.547 EAL: request: mp_malloc_sync 00:03:56.547 EAL: No shared files mode enabled, IPC is disabled 00:03:56.547 EAL: Heap on socket 0 was expanded by 66MB 00:03:56.547 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.547 EAL: request: mp_malloc_sync 00:03:56.547 EAL: No shared files mode enabled, IPC is disabled 00:03:56.547 EAL: Heap on socket 0 was shrunk by 66MB 00:03:56.547 EAL: Trying to obtain current memory policy. 00:03:56.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.547 EAL: Restoring previous memory policy: 4 00:03:56.547 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.547 EAL: request: mp_malloc_sync 00:03:56.547 EAL: No shared files mode enabled, IPC is disabled 00:03:56.547 EAL: Heap on socket 0 was expanded by 130MB 00:03:56.547 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.547 EAL: request: mp_malloc_sync 00:03:56.547 EAL: No shared files mode enabled, IPC is disabled 00:03:56.547 EAL: Heap on socket 0 was shrunk by 130MB 00:03:56.547 EAL: Trying to obtain current memory policy. 00:03:56.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.547 EAL: Restoring previous memory policy: 4 00:03:56.547 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.547 EAL: request: mp_malloc_sync 00:03:56.547 EAL: No shared files mode enabled, IPC is disabled 00:03:56.547 EAL: Heap on socket 0 was expanded by 258MB 00:03:56.547 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.547 EAL: request: mp_malloc_sync 00:03:56.548 EAL: No shared files mode enabled, IPC is disabled 00:03:56.548 EAL: Heap on socket 0 was shrunk by 258MB 00:03:56.548 EAL: Trying to obtain current memory policy. 
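The expand/shrink pairs above are vtophys_spdk_malloc_test allocating and freeing ever larger buffers: each heap size so far (4, 6, 10, 18, 34, 66, 130, 258 MB, with 514 and 1026 MB still to come) is 2^k + 2 MB, so every malloc grows the EAL heap by a doubling step and every free hands the hugepages back. One way to watch the same growth from outside the test, assuming the 2048 kB hugepages configured on this node (an observation aid, not part of the suite):

    # Sample the hugepage pool once a second while vtophys runs;
    # HugePages_Free drops on each 'expanded by' entry and recovers
    # on each 'shrunk by' entry.
    while sleep 1; do
        awk '/HugePages_(Total|Free)/ {printf "%s %s  ", $1, $2} END {print ""}' /proc/meminfo
    done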
00:03:56.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.806 EAL: Restoring previous memory policy: 4 00:03:56.806 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.806 EAL: request: mp_malloc_sync 00:03:56.806 EAL: No shared files mode enabled, IPC is disabled 00:03:56.806 EAL: Heap on socket 0 was expanded by 514MB 00:03:56.806 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.806 EAL: request: mp_malloc_sync 00:03:56.806 EAL: No shared files mode enabled, IPC is disabled 00:03:56.806 EAL: Heap on socket 0 was shrunk by 514MB 00:03:56.807 EAL: Trying to obtain current memory policy. 00:03:56.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.066 EAL: Restoring previous memory policy: 4 00:03:57.067 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.067 EAL: request: mp_malloc_sync 00:03:57.067 EAL: No shared files mode enabled, IPC is disabled 00:03:57.067 EAL: Heap on socket 0 was expanded by 1026MB 00:03:57.325 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.325 EAL: request: mp_malloc_sync 00:03:57.325 EAL: No shared files mode enabled, IPC is disabled 00:03:57.325 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:57.325 passed 00:03:57.325 00:03:57.325 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.325 suites 1 1 n/a 0 0 00:03:57.325 tests 2 2 2 0 0 00:03:57.325 asserts 497 497 497 0 n/a 00:03:57.325 00:03:57.325 Elapsed time = 0.952 seconds 00:03:57.325 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.325 EAL: request: mp_malloc_sync 00:03:57.325 EAL: No shared files mode enabled, IPC is disabled 00:03:57.325 EAL: Heap on socket 0 was shrunk by 2MB 00:03:57.325 EAL: No shared files mode enabled, IPC is disabled 00:03:57.325 EAL: No shared files mode enabled, IPC is disabled 00:03:57.325 EAL: No shared files mode enabled, IPC is disabled 00:03:57.325 00:03:57.325 real 0m1.071s 00:03:57.325 user 0m0.511s 00:03:57.325 sys 0m0.523s 00:03:57.325 10:10:47 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.325 10:10:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:57.325 ************************************ 00:03:57.325 END TEST env_vtophys 00:03:57.325 ************************************ 00:03:57.325 10:10:47 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.325 10:10:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.325 10:10:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.325 10:10:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.325 ************************************ 00:03:57.325 START TEST env_pci 00:03:57.325 ************************************ 00:03:57.325 10:10:47 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.325 00:03:57.325 00:03:57.325 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.325 http://cunit.sourceforge.net/ 00:03:57.325 00:03:57.325 00:03:57.325 Suite: pci 00:03:57.325 Test: pci_hook ...[2024-07-25 10:10:47.089386] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1404973 has claimed it 00:03:57.586 EAL: Cannot find device (10000:00:01.0) 00:03:57.586 EAL: Failed to attach device on primary process 00:03:57.586 passed 00:03:57.586 00:03:57.586 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:57.586 suites 1 1 n/a 0 0 00:03:57.586 tests 1 1 1 0 0 00:03:57.586 asserts 25 25 25 0 n/a 00:03:57.586 00:03:57.586 Elapsed time = 0.017 seconds 00:03:57.586 00:03:57.586 real 0m0.029s 00:03:57.586 user 0m0.012s 00:03:57.586 sys 0m0.017s 00:03:57.586 10:10:47 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.586 10:10:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:57.586 ************************************ 00:03:57.586 END TEST env_pci 00:03:57.586 ************************************ 00:03:57.586 10:10:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:57.586 10:10:47 env -- env/env.sh@15 -- # uname 00:03:57.586 10:10:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:57.586 10:10:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:57.586 10:10:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.586 10:10:47 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:57.586 10:10:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.586 10:10:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.586 ************************************ 00:03:57.586 START TEST env_dpdk_post_init 00:03:57.586 ************************************ 00:03:57.586 10:10:47 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.586 EAL: Detected CPU lcores: 32 00:03:57.586 EAL: Detected NUMA nodes: 2 00:03:57.586 EAL: Detected shared linkage of DPDK 00:03:57.586 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:57.586 EAL: Selected IOVA mode 'VA' 00:03:57.586 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.586 EAL: VFIO support initialized 00:03:57.586 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.586 EAL: Using IOMMU type 1 (Type 1) 00:03:57.586 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:03:57.586 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:03:57.586 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:03:57.586 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:03:57.586 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:03:57.586 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:03:57.586 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:03:57.845 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:03:57.845 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1) 00:03:57.845 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:03:57.845 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:80:04.2 (socket 1) 00:03:57.845 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:03:57.845 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:03:57.845 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:03:57.846 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:03:57.846 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:03:58.784 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:04:02.065 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:04:02.066 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:04:02.066 Starting DPDK initialization... 00:04:02.066 Starting SPDK post initialization... 00:04:02.066 SPDK NVMe probe 00:04:02.066 Attaching to 0000:84:00.0 00:04:02.066 Attached to 0000:84:00.0 00:04:02.066 Cleaning up... 00:04:02.066 00:04:02.066 real 0m4.364s 00:04:02.066 user 0m3.240s 00:04:02.066 sys 0m0.183s 00:04:02.066 10:10:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.066 10:10:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.066 ************************************ 00:04:02.066 END TEST env_dpdk_post_init 00:04:02.066 ************************************ 00:04:02.066 10:10:51 env -- env/env.sh@26 -- # uname 00:04:02.066 10:10:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:02.066 10:10:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.066 10:10:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.066 10:10:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.066 10:10:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.066 ************************************ 00:04:02.066 START TEST env_mem_callbacks 00:04:02.066 ************************************ 00:04:02.066 10:10:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.066 EAL: Detected CPU lcores: 32 00:04:02.066 EAL: Detected NUMA nodes: 2 00:04:02.066 EAL: Detected shared linkage of DPDK 00:04:02.066 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.066 EAL: Selected IOVA mode 'VA' 00:04:02.066 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.066 EAL: VFIO support initialized 00:04:02.066 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.066 00:04:02.066 00:04:02.066 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.066 http://cunit.sourceforge.net/ 00:04:02.066 00:04:02.066 00:04:02.066 Suite: memory 00:04:02.066 Test: test ... 
00:04:02.066 register 0x200000200000 2097152 00:04:02.066 malloc 3145728 00:04:02.066 register 0x200000400000 4194304 00:04:02.066 buf 0x200000500000 len 3145728 PASSED 00:04:02.066 malloc 64 00:04:02.066 buf 0x2000004fff40 len 64 PASSED 00:04:02.066 malloc 4194304 00:04:02.066 register 0x200000800000 6291456 00:04:02.066 buf 0x200000a00000 len 4194304 PASSED 00:04:02.066 free 0x200000500000 3145728 00:04:02.066 free 0x2000004fff40 64 00:04:02.066 unregister 0x200000400000 4194304 PASSED 00:04:02.066 free 0x200000a00000 4194304 00:04:02.066 unregister 0x200000800000 6291456 PASSED 00:04:02.066 malloc 8388608 00:04:02.066 register 0x200000400000 10485760 00:04:02.066 buf 0x200000600000 len 8388608 PASSED 00:04:02.066 free 0x200000600000 8388608 00:04:02.066 unregister 0x200000400000 10485760 PASSED 00:04:02.066 passed 00:04:02.066 00:04:02.066 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.066 suites 1 1 n/a 0 0 00:04:02.066 tests 1 1 1 0 0 00:04:02.066 asserts 15 15 15 0 n/a 00:04:02.066 00:04:02.066 Elapsed time = 0.005 seconds 00:04:02.066 00:04:02.066 real 0m0.046s 00:04:02.066 user 0m0.012s 00:04:02.066 sys 0m0.034s 00:04:02.066 10:10:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.066 10:10:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:02.066 ************************************ 00:04:02.066 END TEST env_mem_callbacks 00:04:02.066 ************************************ 00:04:02.066 00:04:02.066 real 0m6.060s 00:04:02.066 user 0m4.104s 00:04:02.066 sys 0m0.992s 00:04:02.066 10:10:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.066 10:10:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.066 ************************************ 00:04:02.066 END TEST env 00:04:02.066 ************************************ 00:04:02.066 10:10:51 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:02.066 10:10:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.066 10:10:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.066 10:10:51 -- common/autotest_common.sh@10 -- # set +x 00:04:02.066 ************************************ 00:04:02.066 START TEST rpc 00:04:02.066 ************************************ 00:04:02.066 10:10:51 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:02.066 * Looking for test storage... 00:04:02.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.066 10:10:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1405496 00:04:02.066 10:10:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:02.066 10:10:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.066 10:10:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1405496 00:04:02.066 10:10:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 1405496 ']' 00:04:02.066 10:10:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.066 10:10:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:02.066 10:10:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
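rpc.sh above starts the target with -e bdev, saves spdk_pid, and arms trap 'killprocess $spdk_pid; exit 1' on SIGINT/SIGTERM/EXIT so the target is reaped even if a test aborts before waitforlisten returns. The same guard in miniature; killprocess is the harness helper seen in the trace, and kill -9 plus wait is a stand-in with the same effect:

    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!

    # Reap the target on any abnormal exit path; a suite like this one
    # would clear the guard with 'trap - SIGINT SIGTERM EXIT' on the
    # success path before its own orderly shutdown.
    trap 'kill -9 "$spdk_pid" 2>/dev/null; wait "$spdk_pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT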
00:04:02.066 10:10:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:02.066 10:10:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.066 [2024-07-25 10:10:51.828368] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:04:02.066 [2024-07-25 10:10:51.828470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405496 ] 00:04:02.325 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.325 [2024-07-25 10:10:51.888237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.325 [2024-07-25 10:10:52.005114] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:02.325 [2024-07-25 10:10:52.005177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1405496' to capture a snapshot of events at runtime. 00:04:02.325 [2024-07-25 10:10:52.005193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:02.325 [2024-07-25 10:10:52.005206] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:02.325 [2024-07-25 10:10:52.005218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1405496 for offline analysis/debug. 00:04:02.325 [2024-07-25 10:10:52.005258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.581 10:10:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:02.581 10:10:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:02.581 10:10:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.582 10:10:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.582 10:10:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:02.582 10:10:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:02.582 10:10:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.582 10:10:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.582 10:10:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.582 ************************************ 00:04:02.582 START TEST rpc_integrity 00:04:02.582 ************************************ 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:02.582 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.582 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.582 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.582 10:10:52 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.582 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.582 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:02.582 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.582 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.582 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.582 { 00:04:02.582 "name": "Malloc0", 00:04:02.582 "aliases": [ 00:04:02.582 "7d637737-6903-4719-ae0f-7b477ce2cc8b" 00:04:02.582 ], 00:04:02.582 "product_name": "Malloc disk", 00:04:02.582 "block_size": 512, 00:04:02.582 "num_blocks": 16384, 00:04:02.582 "uuid": "7d637737-6903-4719-ae0f-7b477ce2cc8b", 00:04:02.582 "assigned_rate_limits": { 00:04:02.582 "rw_ios_per_sec": 0, 00:04:02.582 "rw_mbytes_per_sec": 0, 00:04:02.582 "r_mbytes_per_sec": 0, 00:04:02.582 "w_mbytes_per_sec": 0 00:04:02.582 }, 00:04:02.582 "claimed": false, 00:04:02.582 "zoned": false, 00:04:02.582 "supported_io_types": { 00:04:02.582 "read": true, 00:04:02.582 "write": true, 00:04:02.582 "unmap": true, 00:04:02.582 "flush": true, 00:04:02.582 "reset": true, 00:04:02.582 "nvme_admin": false, 00:04:02.582 "nvme_io": false, 00:04:02.582 "nvme_io_md": false, 00:04:02.582 "write_zeroes": true, 00:04:02.582 "zcopy": true, 00:04:02.582 "get_zone_info": false, 00:04:02.582 "zone_management": false, 00:04:02.582 "zone_append": false, 00:04:02.582 "compare": false, 00:04:02.582 "compare_and_write": false, 00:04:02.582 "abort": true, 00:04:02.582 "seek_hole": false, 00:04:02.582 "seek_data": false, 00:04:02.582 "copy": true, 00:04:02.582 "nvme_iov_md": false 00:04:02.582 }, 00:04:02.582 "memory_domains": [ 00:04:02.582 { 00:04:02.582 "dma_device_id": "system", 00:04:02.582 "dma_device_type": 1 00:04:02.582 }, 00:04:02.582 { 00:04:02.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.582 "dma_device_type": 2 00:04:02.582 } 00:04:02.582 ], 00:04:02.582 "driver_specific": {} 00:04:02.582 } 00:04:02.582 ]' 00:04:02.582 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.840 [2024-07-25 10:10:52.371617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:02.840 [2024-07-25 10:10:52.371663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.840 [2024-07-25 10:10:52.371686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x793380 00:04:02.840 [2024-07-25 10:10:52.371700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.840 [2024-07-25 10:10:52.373249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
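The rpc_integrity flow traced here builds a malloc bdev, layers a passthru bdev on top of it, and checks the bdev list length at each step. As a rough standalone equivalent, assuming a running spdk_tgt and the stock scripts/rpc.py client on the default /var/tmp/spdk.sock socket (the harness goes through its rpc_cmd wrapper instead):

  ./scripts/rpc.py bdev_get_bdevs | jq length     # 0 on a fresh target
  ./scripts/rpc.py bdev_malloc_create 8 512       # 8 MiB at 512 B blocks -> 16384 blocks, prints "Malloc0"
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length     # 2: Malloc0 (now claimed) plus Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length     # back to 0

Note the claim the passthru takes on its base bdev: in the two-bdev dump that follows, Malloc0 reports "claimed": true with "claim_type": "exclusive_write" while Passthru0 itself remains unclaimed.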
00:04:02.840 [2024-07-25 10:10:52.373276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.840 Passthru0 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.840 { 00:04:02.840 "name": "Malloc0", 00:04:02.840 "aliases": [ 00:04:02.840 "7d637737-6903-4719-ae0f-7b477ce2cc8b" 00:04:02.840 ], 00:04:02.840 "product_name": "Malloc disk", 00:04:02.840 "block_size": 512, 00:04:02.840 "num_blocks": 16384, 00:04:02.840 "uuid": "7d637737-6903-4719-ae0f-7b477ce2cc8b", 00:04:02.840 "assigned_rate_limits": { 00:04:02.840 "rw_ios_per_sec": 0, 00:04:02.840 "rw_mbytes_per_sec": 0, 00:04:02.840 "r_mbytes_per_sec": 0, 00:04:02.840 "w_mbytes_per_sec": 0 00:04:02.840 }, 00:04:02.840 "claimed": true, 00:04:02.840 "claim_type": "exclusive_write", 00:04:02.840 "zoned": false, 00:04:02.840 "supported_io_types": { 00:04:02.840 "read": true, 00:04:02.840 "write": true, 00:04:02.840 "unmap": true, 00:04:02.840 "flush": true, 00:04:02.840 "reset": true, 00:04:02.840 "nvme_admin": false, 00:04:02.840 "nvme_io": false, 00:04:02.840 "nvme_io_md": false, 00:04:02.840 "write_zeroes": true, 00:04:02.840 "zcopy": true, 00:04:02.840 "get_zone_info": false, 00:04:02.840 "zone_management": false, 00:04:02.840 "zone_append": false, 00:04:02.840 "compare": false, 00:04:02.840 "compare_and_write": false, 00:04:02.840 "abort": true, 00:04:02.840 "seek_hole": false, 00:04:02.840 "seek_data": false, 00:04:02.840 "copy": true, 00:04:02.840 "nvme_iov_md": false 00:04:02.840 }, 00:04:02.840 "memory_domains": [ 00:04:02.840 { 00:04:02.840 "dma_device_id": "system", 00:04:02.840 "dma_device_type": 1 00:04:02.840 }, 00:04:02.840 { 00:04:02.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.840 "dma_device_type": 2 00:04:02.840 } 00:04:02.840 ], 00:04:02.840 "driver_specific": {} 00:04:02.840 }, 00:04:02.840 { 00:04:02.840 "name": "Passthru0", 00:04:02.840 "aliases": [ 00:04:02.840 "29b9a9b1-6a46-5edb-b0a4-50955ddb9af4" 00:04:02.840 ], 00:04:02.840 "product_name": "passthru", 00:04:02.840 "block_size": 512, 00:04:02.840 "num_blocks": 16384, 00:04:02.840 "uuid": "29b9a9b1-6a46-5edb-b0a4-50955ddb9af4", 00:04:02.840 "assigned_rate_limits": { 00:04:02.840 "rw_ios_per_sec": 0, 00:04:02.840 "rw_mbytes_per_sec": 0, 00:04:02.840 "r_mbytes_per_sec": 0, 00:04:02.840 "w_mbytes_per_sec": 0 00:04:02.840 }, 00:04:02.840 "claimed": false, 00:04:02.840 "zoned": false, 00:04:02.840 "supported_io_types": { 00:04:02.840 "read": true, 00:04:02.840 "write": true, 00:04:02.840 "unmap": true, 00:04:02.840 "flush": true, 00:04:02.840 "reset": true, 00:04:02.840 "nvme_admin": false, 00:04:02.840 "nvme_io": false, 00:04:02.840 "nvme_io_md": false, 00:04:02.840 "write_zeroes": true, 00:04:02.840 "zcopy": true, 00:04:02.840 "get_zone_info": false, 00:04:02.840 "zone_management": false, 00:04:02.840 "zone_append": false, 00:04:02.840 "compare": false, 00:04:02.840 "compare_and_write": false, 00:04:02.840 "abort": true, 00:04:02.840 "seek_hole": false, 00:04:02.840 "seek_data": false, 00:04:02.840 "copy": true, 00:04:02.840 "nvme_iov_md": false 00:04:02.840 
}, 00:04:02.840 "memory_domains": [ 00:04:02.840 { 00:04:02.840 "dma_device_id": "system", 00:04:02.840 "dma_device_type": 1 00:04:02.840 }, 00:04:02.840 { 00:04:02.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.840 "dma_device_type": 2 00:04:02.840 } 00:04:02.840 ], 00:04:02.840 "driver_specific": { 00:04:02.840 "passthru": { 00:04:02.840 "name": "Passthru0", 00:04:02.840 "base_bdev_name": "Malloc0" 00:04:02.840 } 00:04:02.840 } 00:04:02.840 } 00:04:02.840 ]' 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:02.840 10:10:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:02.840 00:04:02.840 real 0m0.251s 00:04:02.840 user 0m0.158s 00:04:02.840 sys 0m0.031s 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.840 10:10:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.840 ************************************ 00:04:02.840 END TEST rpc_integrity 00:04:02.840 ************************************ 00:04:02.840 10:10:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:02.840 10:10:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.840 10:10:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.840 10:10:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.840 ************************************ 00:04:02.840 START TEST rpc_plugins 00:04:02.840 ************************************ 00:04:02.840 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:02.840 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:02.840 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.840 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.840 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.840 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:02.840 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:02.840 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.840 10:10:52 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:02.840 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.840 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:02.840 { 00:04:02.840 "name": "Malloc1", 00:04:02.840 "aliases": [ 00:04:02.840 "fed66311-44bc-428f-a80c-00c7fc1bf4ef" 00:04:02.840 ], 00:04:02.840 "product_name": "Malloc disk", 00:04:02.840 "block_size": 4096, 00:04:02.840 "num_blocks": 256, 00:04:02.840 "uuid": "fed66311-44bc-428f-a80c-00c7fc1bf4ef", 00:04:02.840 "assigned_rate_limits": { 00:04:02.840 "rw_ios_per_sec": 0, 00:04:02.840 "rw_mbytes_per_sec": 0, 00:04:02.840 "r_mbytes_per_sec": 0, 00:04:02.840 "w_mbytes_per_sec": 0 00:04:02.840 }, 00:04:02.840 "claimed": false, 00:04:02.840 "zoned": false, 00:04:02.840 "supported_io_types": { 00:04:02.840 "read": true, 00:04:02.840 "write": true, 00:04:02.840 "unmap": true, 00:04:02.840 "flush": true, 00:04:02.840 "reset": true, 00:04:02.840 "nvme_admin": false, 00:04:02.840 "nvme_io": false, 00:04:02.840 "nvme_io_md": false, 00:04:02.840 "write_zeroes": true, 00:04:02.840 "zcopy": true, 00:04:02.840 "get_zone_info": false, 00:04:02.840 "zone_management": false, 00:04:02.840 "zone_append": false, 00:04:02.840 "compare": false, 00:04:02.840 "compare_and_write": false, 00:04:02.840 "abort": true, 00:04:02.840 "seek_hole": false, 00:04:02.840 "seek_data": false, 00:04:02.840 "copy": true, 00:04:02.840 "nvme_iov_md": false 00:04:02.840 }, 00:04:02.840 "memory_domains": [ 00:04:02.840 { 00:04:02.840 "dma_device_id": "system", 00:04:02.840 "dma_device_type": 1 00:04:02.840 }, 00:04:02.840 { 00:04:02.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.840 "dma_device_type": 2 00:04:02.840 } 00:04:02.840 ], 00:04:02.840 "driver_specific": {} 00:04:02.840 } 00:04:02.840 ]' 00:04:02.840 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:03.099 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:03.099 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:03.099 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.099 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.099 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.099 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:03.099 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.099 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.099 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.099 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:03.099 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:03.099 10:10:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:03.099 00:04:03.099 real 0m0.132s 00:04:03.099 user 0m0.082s 00:04:03.099 sys 0m0.013s 00:04:03.099 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.099 10:10:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.099 ************************************ 00:04:03.099 END TEST rpc_plugins 00:04:03.099 ************************************ 00:04:03.099 10:10:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:03.099 10:10:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.099 10:10:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.099 10:10:52 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.099 ************************************ 00:04:03.099 START TEST rpc_trace_cmd_test 00:04:03.099 ************************************ 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:03.099 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1405496", 00:04:03.099 "tpoint_group_mask": "0x8", 00:04:03.099 "iscsi_conn": { 00:04:03.099 "mask": "0x2", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "scsi": { 00:04:03.099 "mask": "0x4", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "bdev": { 00:04:03.099 "mask": "0x8", 00:04:03.099 "tpoint_mask": "0xffffffffffffffff" 00:04:03.099 }, 00:04:03.099 "nvmf_rdma": { 00:04:03.099 "mask": "0x10", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "nvmf_tcp": { 00:04:03.099 "mask": "0x20", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "ftl": { 00:04:03.099 "mask": "0x40", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "blobfs": { 00:04:03.099 "mask": "0x80", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "dsa": { 00:04:03.099 "mask": "0x200", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "thread": { 00:04:03.099 "mask": "0x400", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "nvme_pcie": { 00:04:03.099 "mask": "0x800", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "iaa": { 00:04:03.099 "mask": "0x1000", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "nvme_tcp": { 00:04:03.099 "mask": "0x2000", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "bdev_nvme": { 00:04:03.099 "mask": "0x4000", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 }, 00:04:03.099 "sock": { 00:04:03.099 "mask": "0x8000", 00:04:03.099 "tpoint_mask": "0x0" 00:04:03.099 } 00:04:03.099 }' 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:03.099 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:03.358 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:03.358 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:03.358 10:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:03.358 00:04:03.358 real 0m0.213s 00:04:03.358 user 0m0.184s 00:04:03.358 sys 0m0.021s 00:04:03.358 10:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.358 10:10:52 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.358 ************************************ 00:04:03.358 END TEST rpc_trace_cmd_test 00:04:03.358 ************************************ 00:04:03.358 10:10:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:03.358 10:10:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:03.358 10:10:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:03.358 10:10:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.358 10:10:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.358 10:10:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.358 ************************************ 00:04:03.358 START TEST rpc_daemon_integrity 00:04:03.358 ************************************ 00:04:03.358 10:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:03.358 10:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.358 10:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.358 10:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.358 { 00:04:03.358 "name": "Malloc2", 00:04:03.358 "aliases": [ 00:04:03.358 "2ba1f027-c5d4-40b5-9e63-770a9bae74cc" 00:04:03.358 ], 00:04:03.358 "product_name": "Malloc disk", 00:04:03.358 "block_size": 512, 00:04:03.358 "num_blocks": 16384, 00:04:03.358 "uuid": "2ba1f027-c5d4-40b5-9e63-770a9bae74cc", 00:04:03.358 "assigned_rate_limits": { 00:04:03.358 "rw_ios_per_sec": 0, 00:04:03.358 "rw_mbytes_per_sec": 0, 00:04:03.358 "r_mbytes_per_sec": 0, 00:04:03.358 "w_mbytes_per_sec": 0 00:04:03.358 }, 00:04:03.358 "claimed": false, 00:04:03.358 "zoned": false, 00:04:03.358 "supported_io_types": { 00:04:03.358 "read": true, 00:04:03.358 "write": true, 00:04:03.358 "unmap": true, 00:04:03.358 "flush": true, 00:04:03.358 "reset": true, 00:04:03.358 "nvme_admin": false, 00:04:03.358 "nvme_io": false, 00:04:03.358 "nvme_io_md": false, 00:04:03.358 "write_zeroes": true, 00:04:03.358 "zcopy": true, 00:04:03.358 "get_zone_info": false, 00:04:03.358 "zone_management": false, 00:04:03.358 "zone_append": false, 00:04:03.358 "compare": false, 00:04:03.358 "compare_and_write": false, 
00:04:03.358 "abort": true, 00:04:03.358 "seek_hole": false, 00:04:03.358 "seek_data": false, 00:04:03.358 "copy": true, 00:04:03.358 "nvme_iov_md": false 00:04:03.358 }, 00:04:03.358 "memory_domains": [ 00:04:03.358 { 00:04:03.358 "dma_device_id": "system", 00:04:03.358 "dma_device_type": 1 00:04:03.358 }, 00:04:03.358 { 00:04:03.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.358 "dma_device_type": 2 00:04:03.358 } 00:04:03.358 ], 00:04:03.358 "driver_specific": {} 00:04:03.358 } 00:04:03.358 ]' 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.358 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.358 [2024-07-25 10:10:53.117861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:03.359 [2024-07-25 10:10:53.117908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.359 [2024-07-25 10:10:53.117937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5e10c0 00:04:03.359 [2024-07-25 10:10:53.117952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.359 [2024-07-25 10:10:53.119415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.359 [2024-07-25 10:10:53.119443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.359 Passthru0 00:04:03.359 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.359 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.359 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.359 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.618 { 00:04:03.618 "name": "Malloc2", 00:04:03.618 "aliases": [ 00:04:03.618 "2ba1f027-c5d4-40b5-9e63-770a9bae74cc" 00:04:03.618 ], 00:04:03.618 "product_name": "Malloc disk", 00:04:03.618 "block_size": 512, 00:04:03.618 "num_blocks": 16384, 00:04:03.618 "uuid": "2ba1f027-c5d4-40b5-9e63-770a9bae74cc", 00:04:03.618 "assigned_rate_limits": { 00:04:03.618 "rw_ios_per_sec": 0, 00:04:03.618 "rw_mbytes_per_sec": 0, 00:04:03.618 "r_mbytes_per_sec": 0, 00:04:03.618 "w_mbytes_per_sec": 0 00:04:03.618 }, 00:04:03.618 "claimed": true, 00:04:03.618 "claim_type": "exclusive_write", 00:04:03.618 "zoned": false, 00:04:03.618 "supported_io_types": { 00:04:03.618 "read": true, 00:04:03.618 "write": true, 00:04:03.618 "unmap": true, 00:04:03.618 "flush": true, 00:04:03.618 "reset": true, 00:04:03.618 "nvme_admin": false, 00:04:03.618 "nvme_io": false, 00:04:03.618 "nvme_io_md": false, 00:04:03.618 "write_zeroes": true, 00:04:03.618 "zcopy": true, 00:04:03.618 "get_zone_info": false, 00:04:03.618 "zone_management": false, 00:04:03.618 "zone_append": false, 00:04:03.618 "compare": false, 00:04:03.618 "compare_and_write": false, 00:04:03.618 "abort": true, 00:04:03.618 "seek_hole": false, 00:04:03.618 "seek_data": false, 00:04:03.618 "copy": true, 
00:04:03.618 "nvme_iov_md": false 00:04:03.618 }, 00:04:03.618 "memory_domains": [ 00:04:03.618 { 00:04:03.618 "dma_device_id": "system", 00:04:03.618 "dma_device_type": 1 00:04:03.618 }, 00:04:03.618 { 00:04:03.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.618 "dma_device_type": 2 00:04:03.618 } 00:04:03.618 ], 00:04:03.618 "driver_specific": {} 00:04:03.618 }, 00:04:03.618 { 00:04:03.618 "name": "Passthru0", 00:04:03.618 "aliases": [ 00:04:03.618 "cb4a8eed-c732-5bf3-85f8-cd1d5fc33f85" 00:04:03.618 ], 00:04:03.618 "product_name": "passthru", 00:04:03.618 "block_size": 512, 00:04:03.618 "num_blocks": 16384, 00:04:03.618 "uuid": "cb4a8eed-c732-5bf3-85f8-cd1d5fc33f85", 00:04:03.618 "assigned_rate_limits": { 00:04:03.618 "rw_ios_per_sec": 0, 00:04:03.618 "rw_mbytes_per_sec": 0, 00:04:03.618 "r_mbytes_per_sec": 0, 00:04:03.618 "w_mbytes_per_sec": 0 00:04:03.618 }, 00:04:03.618 "claimed": false, 00:04:03.618 "zoned": false, 00:04:03.618 "supported_io_types": { 00:04:03.618 "read": true, 00:04:03.618 "write": true, 00:04:03.618 "unmap": true, 00:04:03.618 "flush": true, 00:04:03.618 "reset": true, 00:04:03.618 "nvme_admin": false, 00:04:03.618 "nvme_io": false, 00:04:03.618 "nvme_io_md": false, 00:04:03.618 "write_zeroes": true, 00:04:03.618 "zcopy": true, 00:04:03.618 "get_zone_info": false, 00:04:03.618 "zone_management": false, 00:04:03.618 "zone_append": false, 00:04:03.618 "compare": false, 00:04:03.618 "compare_and_write": false, 00:04:03.618 "abort": true, 00:04:03.618 "seek_hole": false, 00:04:03.618 "seek_data": false, 00:04:03.618 "copy": true, 00:04:03.618 "nvme_iov_md": false 00:04:03.618 }, 00:04:03.618 "memory_domains": [ 00:04:03.618 { 00:04:03.618 "dma_device_id": "system", 00:04:03.618 "dma_device_type": 1 00:04:03.618 }, 00:04:03.618 { 00:04:03.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.618 "dma_device_type": 2 00:04:03.618 } 00:04:03.618 ], 00:04:03.618 "driver_specific": { 00:04:03.618 "passthru": { 00:04:03.618 "name": "Passthru0", 00:04:03.618 "base_bdev_name": "Malloc2" 00:04:03.618 } 00:04:03.618 } 00:04:03.618 } 00:04:03.618 ]' 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.618 10:10:53 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.618 00:04:03.618 real 0m0.255s 00:04:03.618 user 0m0.162s 00:04:03.618 sys 0m0.025s 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.618 10:10:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.618 ************************************ 00:04:03.618 END TEST rpc_daemon_integrity 00:04:03.618 ************************************ 00:04:03.618 10:10:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:03.618 10:10:53 rpc -- rpc/rpc.sh@84 -- # killprocess 1405496 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 1405496 ']' 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@954 -- # kill -0 1405496 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@955 -- # uname 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1405496 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1405496' 00:04:03.618 killing process with pid 1405496 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@969 -- # kill 1405496 00:04:03.618 10:10:53 rpc -- common/autotest_common.sh@974 -- # wait 1405496 00:04:03.881 00:04:03.881 real 0m1.896s 00:04:03.881 user 0m2.471s 00:04:03.881 sys 0m0.599s 00:04:03.881 10:10:53 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.881 10:10:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.881 ************************************ 00:04:03.881 END TEST rpc 00:04:03.881 ************************************ 00:04:03.881 10:10:53 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:03.881 10:10:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.881 10:10:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.881 10:10:53 -- common/autotest_common.sh@10 -- # set +x 00:04:04.140 ************************************ 00:04:04.140 START TEST skip_rpc 00:04:04.140 ************************************ 00:04:04.140 10:10:53 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:04.140 * Looking for test storage... 
00:04:04.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:04.140 10:10:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.140 10:10:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.140 10:10:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:04.140 10:10:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.140 10:10:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.140 10:10:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.140 ************************************ 00:04:04.140 START TEST skip_rpc 00:04:04.140 ************************************ 00:04:04.140 10:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:04.140 10:10:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1405867 00:04:04.140 10:10:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:04.140 10:10:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.140 10:10:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:04.140 [2024-07-25 10:10:53.810163] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:04:04.140 [2024-07-25 10:10:53.810268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405867 ] 00:04:04.140 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.140 [2024-07-25 10:10:53.869520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.397 [2024-07-25 10:10:53.986530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1405867 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1405867 ']' 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1405867 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1405867 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1405867' 00:04:09.659 killing process with pid 1405867 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1405867 00:04:09.659 10:10:58 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1405867 00:04:09.659 00:04:09.659 real 0m5.359s 00:04:09.659 user 0m5.074s 00:04:09.659 sys 0m0.283s 00:04:09.659 10:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.659 10:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.659 ************************************ 00:04:09.659 END TEST skip_rpc 00:04:09.659 ************************************ 00:04:09.659 10:10:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:09.659 10:10:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.659 10:10:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.660 10:10:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.660 ************************************ 00:04:09.660 START TEST skip_rpc_with_json 00:04:09.660 ************************************ 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1406400 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1406400 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1406400 ']' 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
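The skip_rpc case that finishes above starts the target with --no-rpc-server and asserts that an ordinary RPC is refused (the harness's NOT wrapper inverts the expected failure). A minimal hand-run sketch, assuming default build paths and socket, with the NOT/killprocess helpers replaced by plain shell:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt=$!
  sleep 5                                    # same settle time the test uses
  if ./scripts/rpc.py spdk_get_version; then
    echo 'unexpected: RPC server answered'   # this would fail the test
  else
    echo 'spdk_get_version refused, as expected'
  fi
  kill "$tgt"; wait "$tgt"

The skip_rpc_with_json case starting here takes the opposite tack: it runs a full RPC server first, precisely so the live configuration can be captured.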
00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:09.660 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.660 [2024-07-25 10:10:59.230114] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:04:09.660 [2024-07-25 10:10:59.230211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406400 ] 00:04:09.660 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.660 [2024-07-25 10:10:59.306116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.918 [2024-07-25 10:10:59.462426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.190 [2024-07-25 10:10:59.712180] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:10.190 request: 00:04:10.190 { 00:04:10.190 "trtype": "tcp", 00:04:10.190 "method": "nvmf_get_transports", 00:04:10.190 "req_id": 1 00:04:10.190 } 00:04:10.190 Got JSON-RPC error response 00:04:10.190 response: 00:04:10.190 { 00:04:10.190 "code": -19, 00:04:10.190 "message": "No such device" 00:04:10.190 } 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.190 [2024-07-25 10:10:59.720296] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.190 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.190 { 00:04:10.190 "subsystems": [ 00:04:10.190 { 00:04:10.190 "subsystem": "vfio_user_target", 00:04:10.190 "config": null 00:04:10.190 }, 00:04:10.190 { 00:04:10.190 "subsystem": "keyring", 00:04:10.190 "config": [] 00:04:10.190 }, 00:04:10.190 { 00:04:10.190 "subsystem": "iobuf", 00:04:10.190 "config": [ 00:04:10.190 { 00:04:10.190 "method": "iobuf_set_options", 00:04:10.190 "params": { 00:04:10.190 "small_pool_count": 8192, 00:04:10.190 "large_pool_count": 1024, 00:04:10.190 "small_bufsize": 8192, 00:04:10.190 "large_bufsize": 
135168 00:04:10.190 } 00:04:10.190 } 00:04:10.190 ] 00:04:10.190 }, 00:04:10.190 { 00:04:10.190 "subsystem": "sock", 00:04:10.190 "config": [ 00:04:10.190 { 00:04:10.190 "method": "sock_set_default_impl", 00:04:10.190 "params": { 00:04:10.190 "impl_name": "posix" 00:04:10.190 } 00:04:10.190 }, 00:04:10.190 { 00:04:10.190 "method": "sock_impl_set_options", 00:04:10.190 "params": { 00:04:10.190 "impl_name": "ssl", 00:04:10.190 "recv_buf_size": 4096, 00:04:10.190 "send_buf_size": 4096, 00:04:10.190 "enable_recv_pipe": true, 00:04:10.190 "enable_quickack": false, 00:04:10.190 "enable_placement_id": 0, 00:04:10.190 "enable_zerocopy_send_server": true, 00:04:10.190 "enable_zerocopy_send_client": false, 00:04:10.190 "zerocopy_threshold": 0, 00:04:10.190 "tls_version": 0, 00:04:10.190 "enable_ktls": false 00:04:10.190 } 00:04:10.190 }, 00:04:10.190 { 00:04:10.190 "method": "sock_impl_set_options", 00:04:10.190 "params": { 00:04:10.190 "impl_name": "posix", 00:04:10.190 "recv_buf_size": 2097152, 00:04:10.190 "send_buf_size": 2097152, 00:04:10.190 "enable_recv_pipe": true, 00:04:10.190 "enable_quickack": false, 00:04:10.190 "enable_placement_id": 0, 00:04:10.190 "enable_zerocopy_send_server": true, 00:04:10.190 "enable_zerocopy_send_client": false, 00:04:10.190 "zerocopy_threshold": 0, 00:04:10.190 "tls_version": 0, 00:04:10.190 "enable_ktls": false 00:04:10.190 } 00:04:10.190 } 00:04:10.190 ] 00:04:10.190 }, 00:04:10.190 { 00:04:10.190 "subsystem": "vmd", 00:04:10.190 "config": [] 00:04:10.190 }, 00:04:10.190 { 00:04:10.190 "subsystem": "accel", 00:04:10.190 "config": [ 00:04:10.190 { 00:04:10.190 "method": "accel_set_options", 00:04:10.190 "params": { 00:04:10.190 "small_cache_size": 128, 00:04:10.190 "large_cache_size": 16, 00:04:10.190 "task_count": 2048, 00:04:10.190 "sequence_count": 2048, 00:04:10.190 "buf_count": 2048 00:04:10.190 } 00:04:10.190 } 00:04:10.190 ] 00:04:10.190 }, 00:04:10.190 { 00:04:10.190 "subsystem": "bdev", 00:04:10.190 "config": [ 00:04:10.190 { 00:04:10.190 "method": "bdev_set_options", 00:04:10.190 "params": { 00:04:10.190 "bdev_io_pool_size": 65535, 00:04:10.190 "bdev_io_cache_size": 256, 00:04:10.191 "bdev_auto_examine": true, 00:04:10.191 "iobuf_small_cache_size": 128, 00:04:10.191 "iobuf_large_cache_size": 16 00:04:10.191 } 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "method": "bdev_raid_set_options", 00:04:10.191 "params": { 00:04:10.191 "process_window_size_kb": 1024, 00:04:10.191 "process_max_bandwidth_mb_sec": 0 00:04:10.191 } 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "method": "bdev_iscsi_set_options", 00:04:10.191 "params": { 00:04:10.191 "timeout_sec": 30 00:04:10.191 } 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "method": "bdev_nvme_set_options", 00:04:10.191 "params": { 00:04:10.191 "action_on_timeout": "none", 00:04:10.191 "timeout_us": 0, 00:04:10.191 "timeout_admin_us": 0, 00:04:10.191 "keep_alive_timeout_ms": 10000, 00:04:10.191 "arbitration_burst": 0, 00:04:10.191 "low_priority_weight": 0, 00:04:10.191 "medium_priority_weight": 0, 00:04:10.191 "high_priority_weight": 0, 00:04:10.191 "nvme_adminq_poll_period_us": 10000, 00:04:10.191 "nvme_ioq_poll_period_us": 0, 00:04:10.191 "io_queue_requests": 0, 00:04:10.191 "delay_cmd_submit": true, 00:04:10.191 "transport_retry_count": 4, 00:04:10.191 "bdev_retry_count": 3, 00:04:10.191 "transport_ack_timeout": 0, 00:04:10.191 "ctrlr_loss_timeout_sec": 0, 00:04:10.191 "reconnect_delay_sec": 0, 00:04:10.191 "fast_io_fail_timeout_sec": 0, 00:04:10.191 "disable_auto_failback": false, 00:04:10.191 "generate_uuids": 
false, 00:04:10.191 "transport_tos": 0, 00:04:10.191 "nvme_error_stat": false, 00:04:10.191 "rdma_srq_size": 0, 00:04:10.191 "io_path_stat": false, 00:04:10.191 "allow_accel_sequence": false, 00:04:10.191 "rdma_max_cq_size": 0, 00:04:10.191 "rdma_cm_event_timeout_ms": 0, 00:04:10.191 "dhchap_digests": [ 00:04:10.191 "sha256", 00:04:10.191 "sha384", 00:04:10.191 "sha512" 00:04:10.191 ], 00:04:10.191 "dhchap_dhgroups": [ 00:04:10.191 "null", 00:04:10.191 "ffdhe2048", 00:04:10.191 "ffdhe3072", 00:04:10.191 "ffdhe4096", 00:04:10.191 "ffdhe6144", 00:04:10.191 "ffdhe8192" 00:04:10.191 ] 00:04:10.191 } 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "method": "bdev_nvme_set_hotplug", 00:04:10.191 "params": { 00:04:10.191 "period_us": 100000, 00:04:10.191 "enable": false 00:04:10.191 } 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "method": "bdev_wait_for_examine" 00:04:10.191 } 00:04:10.191 ] 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "subsystem": "scsi", 00:04:10.191 "config": null 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "subsystem": "scheduler", 00:04:10.191 "config": [ 00:04:10.191 { 00:04:10.191 "method": "framework_set_scheduler", 00:04:10.191 "params": { 00:04:10.191 "name": "static" 00:04:10.191 } 00:04:10.191 } 00:04:10.191 ] 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "subsystem": "vhost_scsi", 00:04:10.191 "config": [] 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "subsystem": "vhost_blk", 00:04:10.191 "config": [] 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "subsystem": "ublk", 00:04:10.191 "config": [] 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "subsystem": "nbd", 00:04:10.191 "config": [] 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "subsystem": "nvmf", 00:04:10.191 "config": [ 00:04:10.191 { 00:04:10.191 "method": "nvmf_set_config", 00:04:10.191 "params": { 00:04:10.191 "discovery_filter": "match_any", 00:04:10.191 "admin_cmd_passthru": { 00:04:10.191 "identify_ctrlr": false 00:04:10.191 } 00:04:10.191 } 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "method": "nvmf_set_max_subsystems", 00:04:10.191 "params": { 00:04:10.191 "max_subsystems": 1024 00:04:10.191 } 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "method": "nvmf_set_crdt", 00:04:10.191 "params": { 00:04:10.191 "crdt1": 0, 00:04:10.191 "crdt2": 0, 00:04:10.191 "crdt3": 0 00:04:10.191 } 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "method": "nvmf_create_transport", 00:04:10.191 "params": { 00:04:10.191 "trtype": "TCP", 00:04:10.191 "max_queue_depth": 128, 00:04:10.191 "max_io_qpairs_per_ctrlr": 127, 00:04:10.191 "in_capsule_data_size": 4096, 00:04:10.191 "max_io_size": 131072, 00:04:10.191 "io_unit_size": 131072, 00:04:10.191 "max_aq_depth": 128, 00:04:10.191 "num_shared_buffers": 511, 00:04:10.191 "buf_cache_size": 4294967295, 00:04:10.191 "dif_insert_or_strip": false, 00:04:10.191 "zcopy": false, 00:04:10.191 "c2h_success": true, 00:04:10.191 "sock_priority": 0, 00:04:10.191 "abort_timeout_sec": 1, 00:04:10.191 "ack_timeout": 0, 00:04:10.191 "data_wr_pool_size": 0 00:04:10.191 } 00:04:10.191 } 00:04:10.191 ] 00:04:10.191 }, 00:04:10.191 { 00:04:10.191 "subsystem": "iscsi", 00:04:10.191 "config": [ 00:04:10.191 { 00:04:10.191 "method": "iscsi_set_options", 00:04:10.191 "params": { 00:04:10.191 "node_base": "iqn.2016-06.io.spdk", 00:04:10.191 "max_sessions": 128, 00:04:10.191 "max_connections_per_session": 2, 00:04:10.191 "max_queue_depth": 64, 00:04:10.191 "default_time2wait": 2, 00:04:10.191 "default_time2retain": 20, 00:04:10.191 "first_burst_length": 8192, 00:04:10.191 "immediate_data": true, 00:04:10.191 "allow_duplicated_isid": 
false, 00:04:10.191 "error_recovery_level": 0, 00:04:10.191 "nop_timeout": 60, 00:04:10.191 "nop_in_interval": 30, 00:04:10.191 "disable_chap": false, 00:04:10.191 "require_chap": false, 00:04:10.191 "mutual_chap": false, 00:04:10.191 "chap_group": 0, 00:04:10.191 "max_large_datain_per_connection": 64, 00:04:10.191 "max_r2t_per_connection": 4, 00:04:10.191 "pdu_pool_size": 36864, 00:04:10.191 "immediate_data_pool_size": 16384, 00:04:10.191 "data_out_pool_size": 2048 00:04:10.191 } 00:04:10.191 } 00:04:10.191 ] 00:04:10.191 } 00:04:10.191 ] 00:04:10.191 } 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1406400 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1406400 ']' 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1406400 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1406400 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1406400' 00:04:10.191 killing process with pid 1406400 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1406400 00:04:10.191 10:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1406400 00:04:10.760 10:11:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1406508 00:04:10.760 10:11:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.760 10:11:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1406508 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1406508 ']' 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1406508 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1406508 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1406508' 00:04:16.024 killing process with pid 1406508 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1406508 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
1406508 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:16.024 00:04:16.024 real 0m6.428s 00:04:16.024 user 0m6.196s 00:04:16.024 sys 0m0.647s 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.024 10:11:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.024 ************************************ 00:04:16.024 END TEST skip_rpc_with_json 00:04:16.024 ************************************ 00:04:16.024 10:11:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:16.024 10:11:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.024 10:11:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.025 10:11:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.025 ************************************ 00:04:16.025 START TEST skip_rpc_with_delay 00:04:16.025 ************************************ 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.025 [2024-07-25 10:11:05.712844] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
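Two assertions land in this stretch. skip_rpc_with_json, concluded just above, proves a configuration captured with save_config can cold-start the target: the restarted instance's log must contain the 'TCP Transport Init' banner even though no RPC ever created the transport on that instance. skip_rpc_with_delay, whose error entries appear immediately above and below, proves spdk_tgt rejects --wait-for-rpc when --no-rpc-server is also given. A rough combined sketch, assuming the same rpc.py client and default paths:

  # json round trip: capture the live config, restart from the file
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > config.json
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 5; grep -q 'TCP Transport Init' log.txt && echo 'transport restored from config'
  # delay variant: the contradictory flag pair must make startup fail
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc || echo 'rejected, as expected'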
00:04:16.025 [2024-07-25 10:11:05.712989] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:16.025 00:04:16.025 real 0m0.079s 00:04:16.025 user 0m0.044s 00:04:16.025 sys 0m0.034s 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.025 10:11:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:16.025 ************************************ 00:04:16.025 END TEST skip_rpc_with_delay 00:04:16.025 ************************************ 00:04:16.025 10:11:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:16.025 10:11:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:16.025 10:11:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:16.025 10:11:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.025 10:11:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.025 10:11:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.025 ************************************ 00:04:16.025 START TEST exit_on_failed_rpc_init 00:04:16.025 ************************************ 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1407055 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1407055 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1407055 ']' 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:16.025 10:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.284 [2024-07-25 10:11:05.840633] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:04:16.284 [2024-07-25 10:11:05.840732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407055 ] 00:04:16.284 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.284 [2024-07-25 10:11:05.900065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.284 [2024-07-25 10:11:06.017248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.542 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.543 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:16.543 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:16.543 [2024-07-25 10:11:06.308336] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:04:16.543 [2024-07-25 10:11:06.308426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407072 ] 00:04:16.801 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.801 [2024-07-25 10:11:06.369035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.801 [2024-07-25 10:11:06.487130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.801 [2024-07-25 10:11:06.487251] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:16.801 [2024-07-25 10:11:06.487273] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:16.801 [2024-07-25 10:11:06.487286] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1407055 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1407055 ']' 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1407055 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1407055 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1407055' 00:04:17.061 killing process with pid 1407055 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1407055 00:04:17.061 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1407055 00:04:17.319 00:04:17.319 real 0m1.183s 00:04:17.319 user 0m1.446s 00:04:17.319 sys 0m0.403s 00:04:17.319 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.319 10:11:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.319 ************************************ 00:04:17.319 END TEST exit_on_failed_rpc_init 00:04:17.319 ************************************ 00:04:17.319 10:11:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:17.319 00:04:17.319 real 0m13.315s 00:04:17.319 user 0m12.869s 00:04:17.319 sys 0m1.537s 00:04:17.319 10:11:06 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.319 10:11:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.319 ************************************ 00:04:17.319 END TEST skip_rpc 00:04:17.319 ************************************ 00:04:17.319 10:11:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:17.319 10:11:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.319 10:11:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.319 10:11:07 -- common/autotest_common.sh@10 -- # set +x 00:04:17.319 ************************************ 00:04:17.319 START TEST rpc_client 00:04:17.319 ************************************ 00:04:17.319 10:11:07 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:17.319 * Looking for test storage... 00:04:17.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:17.320 10:11:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:17.578 OK 00:04:17.578 10:11:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:17.578 00:04:17.578 real 0m0.066s 00:04:17.578 user 0m0.028s 00:04:17.578 sys 0m0.042s 00:04:17.578 10:11:07 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.578 10:11:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:17.578 ************************************ 00:04:17.578 END TEST rpc_client 00:04:17.578 ************************************ 00:04:17.578 10:11:07 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:17.578 10:11:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.578 10:11:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.578 10:11:07 -- common/autotest_common.sh@10 -- # set +x 00:04:17.578 ************************************ 00:04:17.578 START TEST json_config 00:04:17.578 ************************************ 00:04:17.578 10:11:07 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:17.578 10:11:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:17.578 10:11:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:17.579 10:11:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:17.579 10:11:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:17.579 10:11:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:17.579 10:11:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:17.579 10:11:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.579 10:11:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.579 10:11:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.579 10:11:07 json_config -- paths/export.sh@5 -- # export PATH 00:04:17.579 10:11:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@47 -- # : 0 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:17.579 10:11:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:17.579 10:11:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:17.579 INFO: JSON configuration test init 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.579 10:11:07 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:17.579 10:11:07 json_config -- json_config/common.sh@9 -- # local app=target 00:04:17.579 10:11:07 json_config -- json_config/common.sh@10 -- # shift 00:04:17.579 10:11:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:17.579 10:11:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:17.579 10:11:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:17.579 10:11:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:17.579 10:11:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:17.579 10:11:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1407285 00:04:17.579 10:11:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:17.579 10:11:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:17.579 Waiting for target to run... 00:04:17.579 10:11:07 json_config -- json_config/common.sh@25 -- # waitforlisten 1407285 /var/tmp/spdk_tgt.sock 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 1407285 ']' 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:17.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:17.579 10:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.579 [2024-07-25 10:11:07.271278] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:04:17.579 [2024-07-25 10:11:07.271384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407285 ] 00:04:17.579 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.855 [2024-07-25 10:11:07.582096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.153 [2024-07-25 10:11:07.683805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.721 10:11:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:18.721 10:11:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:18.721 10:11:08 json_config -- json_config/common.sh@26 -- # echo '' 00:04:18.721 00:04:18.721 10:11:08 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:18.721 10:11:08 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:18.721 10:11:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.721 10:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.721 10:11:08 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:18.721 10:11:08 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:18.721 10:11:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.721 10:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.721 10:11:08 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:18.721 10:11:08 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:18.721 10:11:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:22.007 10:11:11 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types
00:04:22.007 10:11:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:22.007 10:11:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.007 10:11:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.007 10:11:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:22.007 10:11:11 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:22.007 10:11:11 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:22.007 10:11:11 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:22.007 10:11:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:22.007 10:11:11 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@51 -- # sort 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:22.265 10:11:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.265 10:11:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:22.265 10:11:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.265 10:11:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:22.265 10:11:11 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:22.266 10:11:11 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:22.266 10:11:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:22.523 MallocForNvmf0
00:04:22.523 10:11:12 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:22.523 10:11:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:22.782 MallocForNvmf1 00:04:22.782 10:11:12 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:22.782 10:11:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:23.040 [2024-07-25 10:11:12.732873] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.040 10:11:12 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.040 10:11:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.298 10:11:13 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.298 10:11:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.863 10:11:13 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.863 10:11:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:24.120 10:11:13 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:24.120 10:11:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:24.378 [2024-07-25 10:11:13.916933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:24.378 10:11:13 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:24.378 10:11:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.378 10:11:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.378 10:11:13 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:24.378 10:11:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.378 10:11:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.378 10:11:13 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:24.378 10:11:13 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.378 10:11:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.635 MallocBdevForConfigChangeCheck
00:04:24.635 10:11:14 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:24.635 10:11:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.635 10:11:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.635 10:11:14 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:24.635 10:11:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.893 10:11:14 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:24.893 INFO: shutting down applications... 00:04:24.893 10:11:14 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:24.893 10:11:14 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:24.893 10:11:14 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:24.893 10:11:14 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:26.793 Calling clear_iscsi_subsystem 00:04:26.793 Calling clear_nvmf_subsystem 00:04:26.793 Calling clear_nbd_subsystem 00:04:26.793 Calling clear_ublk_subsystem 00:04:26.793 Calling clear_vhost_blk_subsystem 00:04:26.793 Calling clear_vhost_scsi_subsystem 00:04:26.793 Calling clear_bdev_subsystem 00:04:26.793 10:11:16 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:26.793 10:11:16 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:26.793 10:11:16 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:26.793 10:11:16 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:26.793 10:11:16 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.793 10:11:16 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:27.050 10:11:16 json_config -- json_config/json_config.sh@349 -- # break 00:04:27.050 10:11:16 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:27.050 10:11:16 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:27.050 10:11:16 json_config -- json_config/common.sh@31 -- # local app=target 00:04:27.050 10:11:16 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:27.050 10:11:16 json_config -- json_config/common.sh@35 -- # [[ -n 1407285 ]] 00:04:27.050 10:11:16 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1407285 00:04:27.050 10:11:16 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:27.050 10:11:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.050 10:11:16 json_config -- json_config/common.sh@41 -- # kill -0 1407285 00:04:27.050 10:11:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:27.616 10:11:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:27.616 10:11:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.616 10:11:17 json_config -- json_config/common.sh@41 -- # kill -0 1407285
00:04:27.616 10:11:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:27.616 10:11:17 json_config -- json_config/common.sh@43 -- # break 00:04:27.616 10:11:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:27.616 10:11:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:27.616 SPDK target shutdown done 00:04:27.616 10:11:17 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:27.616 INFO: relaunching applications... 00:04:27.616 10:11:17 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.616 10:11:17 json_config -- json_config/common.sh@9 -- # local app=target 00:04:27.616 10:11:17 json_config -- json_config/common.sh@10 -- # shift 00:04:27.616 10:11:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.616 10:11:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.616 10:11:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.616 10:11:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.616 10:11:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.616 10:11:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1408318 00:04:27.616 10:11:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.616 10:11:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.616 Waiting for target to run... 00:04:27.616 10:11:17 json_config -- json_config/common.sh@25 -- # waitforlisten 1408318 /var/tmp/spdk_tgt.sock 00:04:27.616 10:11:17 json_config -- common/autotest_common.sh@831 -- # '[' -z 1408318 ']' 00:04:27.616 10:11:17 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.616 10:11:17 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.616 10:11:17 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.616 10:11:17 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.616 10:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.616 [2024-07-25 10:11:17.250132] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:04:27.616 [2024-07-25 10:11:17.250233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408318 ] 00:04:27.616 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.879 [2024-07-25 10:11:17.565289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.138 [2024-07-25 10:11:17.661162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.421 [2024-07-25 10:11:20.682327] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.421 [2024-07-25 10:11:20.714714] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.421 10:11:20 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.421 10:11:20 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:31.421 10:11:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.421 00:04:31.421 10:11:20 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:31.421 10:11:20 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:31.421 INFO: Checking if target configuration is the same... 00:04:31.421 10:11:20 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.421 10:11:20 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:31.421 10:11:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.421 + '[' 2 -ne 2 ']' 00:04:31.421 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:31.421 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:31.421 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.421 +++ basename /dev/fd/62 00:04:31.421 ++ mktemp /tmp/62.XXX 00:04:31.421 + tmp_file_1=/tmp/62.GRc 00:04:31.421 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.421 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.421 + tmp_file_2=/tmp/spdk_tgt_config.json.8x4 00:04:31.421 + ret=0 00:04:31.421 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.421 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.678 + diff -u /tmp/62.GRc /tmp/spdk_tgt_config.json.8x4 00:04:31.678 + echo 'INFO: JSON config files are the same' 00:04:31.678 INFO: JSON config files are the same 00:04:31.678 + rm /tmp/62.GRc /tmp/spdk_tgt_config.json.8x4 00:04:31.678 + exit 0 00:04:31.678 10:11:21 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:31.678 10:11:21 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:31.678 INFO: changing configuration and checking if this can be detected... 
00:04:31.678 10:11:21 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.678 10:11:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.934 10:11:21 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.934 10:11:21 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:31.934 10:11:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.934 + '[' 2 -ne 2 ']' 00:04:31.934 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:31.934 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:31.934 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.934 +++ basename /dev/fd/62 00:04:31.934 ++ mktemp /tmp/62.XXX 00:04:31.934 + tmp_file_1=/tmp/62.8ZV 00:04:31.934 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.934 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.934 + tmp_file_2=/tmp/spdk_tgt_config.json.WIL 00:04:31.934 + ret=0 00:04:31.934 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.191 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.448 + diff -u /tmp/62.8ZV /tmp/spdk_tgt_config.json.WIL 00:04:32.448 + ret=1 00:04:32.448 + echo '=== Start of file: /tmp/62.8ZV ===' 00:04:32.448 + cat /tmp/62.8ZV 00:04:32.448 + echo '=== End of file: /tmp/62.8ZV ===' 00:04:32.448 + echo '' 00:04:32.448 + echo '=== Start of file: /tmp/spdk_tgt_config.json.WIL ===' 00:04:32.448 + cat /tmp/spdk_tgt_config.json.WIL 00:04:32.448 + echo '=== End of file: /tmp/spdk_tgt_config.json.WIL ===' 00:04:32.448 + echo '' 00:04:32.448 + rm /tmp/62.8ZV /tmp/spdk_tgt_config.json.WIL 00:04:32.448 + exit 1 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:32.448 INFO: configuration change detected. 
00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@321 -- # [[ -n 1408318 ]] 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.448 10:11:22 json_config -- json_config/json_config.sh@327 -- # killprocess 1408318 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@950 -- # '[' -z 1408318 ']' 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@954 -- # kill -0 1408318 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@955 -- # uname 00:04:32.448 10:11:22 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.449 10:11:22 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1408318 00:04:32.449 10:11:22 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.449 10:11:22 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.449 10:11:22 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1408318' 00:04:32.449 killing process with pid 1408318 00:04:32.449 10:11:22 json_config -- common/autotest_common.sh@969 -- # kill 1408318 00:04:32.449 10:11:22 json_config -- common/autotest_common.sh@974 -- # wait 1408318 00:04:34.346 10:11:23 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.346 10:11:23 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:34.346 10:11:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.346 10:11:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.346 10:11:23 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:34.346 10:11:23 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:34.346 INFO: Success 00:04:34.346 00:04:34.346 real 0m16.521s 
00:04:34.346 user 0m19.309s 00:04:34.346 sys 0m1.854s 00:04:34.346 10:11:23 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.346 10:11:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.346 ************************************ 00:04:34.346 END TEST json_config 00:04:34.346 ************************************ 00:04:34.346 10:11:23 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:34.346 10:11:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.346 10:11:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.346 10:11:23 -- common/autotest_common.sh@10 -- # set +x 00:04:34.346 ************************************ 00:04:34.346 START TEST json_config_extra_key 00:04:34.346 ************************************ 00:04:34.346 10:11:23 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:34.346 10:11:23 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.346 10:11:23 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.346 10:11:23 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:34.346 10:11:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.346 10:11:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.346 10:11:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.346 10:11:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:34.346 10:11:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:34.346 10:11:23 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:34.346 INFO: launching applications... 00:04:34.346 10:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1409038 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.346 Waiting for target to run... 00:04:34.346 10:11:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1409038 /var/tmp/spdk_tgt.sock 00:04:34.346 10:11:23 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1409038 ']' 00:04:34.346 10:11:23 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.346 10:11:23 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.346 10:11:23 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.347 10:11:23 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.347 10:11:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.347 [2024-07-25 10:11:23.863338] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:04:34.347 [2024-07-25 10:11:23.863443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409038 ] 00:04:34.347 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.604 [2024-07-25 10:11:24.215504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.605 [2024-07-25 10:11:24.311306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.171 10:11:24 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.171 10:11:24 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:35.171 10:11:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:35.171 00:04:35.171 10:11:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:35.171 INFO: shutting down applications... 00:04:35.171 10:11:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:35.171 10:11:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:35.171 10:11:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:35.171 10:11:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1409038 ]] 00:04:35.171 10:11:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1409038 00:04:35.171 10:11:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:35.171 10:11:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.171 10:11:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1409038 00:04:35.171 10:11:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.738 10:11:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.738 10:11:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.738 10:11:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1409038 00:04:35.738 10:11:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:35.738 10:11:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:35.738 10:11:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:35.738 10:11:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:35.738 SPDK target shutdown done 00:04:35.738 10:11:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:35.738 Success 00:04:35.738 00:04:35.738 real 0m1.676s 00:04:35.738 user 0m1.618s 00:04:35.738 sys 0m0.464s 00:04:35.738 10:11:25 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.738 10:11:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.738 ************************************ 00:04:35.738 END TEST json_config_extra_key 00:04:35.738 ************************************ 00:04:35.738 10:11:25 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.738 10:11:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.738 10:11:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.738 10:11:25 -- common/autotest_common.sh@10 -- # set +x
00:04:35.738 ************************************ 00:04:35.738 START TEST alias_rpc 00:04:35.738 ************************************ 00:04:35.738 10:11:25 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.738 * Looking for test storage... 00:04:35.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:35.996 10:11:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:35.996 10:11:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1409280 00:04:35.996 10:11:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.996 10:11:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1409280 00:04:35.996 10:11:25 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1409280 ']' 00:04:35.996 10:11:25 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.996 10:11:25 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.996 10:11:25 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.996 10:11:25 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.996 10:11:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.996 [2024-07-25 10:11:25.574630] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:04:35.996 [2024-07-25 10:11:25.574736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409280 ] 00:04:35.996 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.996 [2024-07-25 10:11:25.635588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.996 [2024-07-25 10:11:25.752567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.254 10:11:25 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.254 10:11:25 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:36.254 10:11:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:36.819 10:11:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1409280 00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1409280 ']' 00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1409280 00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409280 00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409280' 00:04:36.819 killing process with pid 1409280 00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@969 -- # kill 1409280
00:04:36.819 10:11:26 alias_rpc -- common/autotest_common.sh@974 -- # wait 1409280 00:04:37.078 00:04:37.078 real 0m1.187s 00:04:37.078 user 0m1.364s 00:04:37.078 sys 0m0.405s 00:04:37.078 10:11:26 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.078 10:11:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.078 ************************************ 00:04:37.078 END TEST alias_rpc 00:04:37.078 ************************************ 00:04:37.078 10:11:26 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:37.078 10:11:26 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.078 10:11:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.078 10:11:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.078 10:11:26 -- common/autotest_common.sh@10 -- # set +x 00:04:37.078 ************************************ 00:04:37.078 START TEST spdkcli_tcp 00:04:37.078 ************************************ 00:04:37.078 10:11:26 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.078 * Looking for test storage... 00:04:37.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:37.078 10:11:26 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.078 10:11:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1409433 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:37.078 10:11:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1409433 00:04:37.078 10:11:26 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1409433 ']' 00:04:37.078 10:11:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.078 10:11:26 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.078 10:11:26 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.078 10:11:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.078 10:11:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.078 [2024-07-25 10:11:26.823140] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:04:37.078 [2024-07-25 10:11:26.823245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409433 ] 00:04:37.078 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.336 [2024-07-25 10:11:26.883555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.336 [2024-07-25 10:11:27.001509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.336 [2024-07-25 10:11:27.001537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.594 10:11:27 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.594 10:11:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:37.594 10:11:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1409453 00:04:37.594 10:11:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:37.594 10:11:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.854 [ 00:04:37.854 "bdev_malloc_delete", 00:04:37.854 "bdev_malloc_create", 00:04:37.854 "bdev_null_resize", 00:04:37.854 "bdev_null_delete", 00:04:37.854 "bdev_null_create", 00:04:37.854 "bdev_nvme_cuse_unregister", 00:04:37.854 "bdev_nvme_cuse_register", 00:04:37.854 "bdev_opal_new_user", 00:04:37.854 "bdev_opal_set_lock_state", 00:04:37.854 "bdev_opal_delete", 00:04:37.854 "bdev_opal_get_info", 00:04:37.854 "bdev_opal_create", 00:04:37.854 "bdev_nvme_opal_revert", 00:04:37.854 "bdev_nvme_opal_init", 00:04:37.854 "bdev_nvme_send_cmd", 00:04:37.854 "bdev_nvme_get_path_iostat", 00:04:37.854 "bdev_nvme_get_mdns_discovery_info", 00:04:37.854 "bdev_nvme_stop_mdns_discovery", 00:04:37.854 "bdev_nvme_start_mdns_discovery", 00:04:37.854 "bdev_nvme_set_multipath_policy", 00:04:37.854 "bdev_nvme_set_preferred_path", 00:04:37.854 "bdev_nvme_get_io_paths", 00:04:37.854 "bdev_nvme_remove_error_injection", 00:04:37.854 "bdev_nvme_add_error_injection", 00:04:37.854 "bdev_nvme_get_discovery_info", 00:04:37.854 "bdev_nvme_stop_discovery", 00:04:37.854 "bdev_nvme_start_discovery", 00:04:37.854 "bdev_nvme_get_controller_health_info", 00:04:37.854 "bdev_nvme_disable_controller", 00:04:37.854 "bdev_nvme_enable_controller", 00:04:37.854 "bdev_nvme_reset_controller", 00:04:37.854 "bdev_nvme_get_transport_statistics", 00:04:37.854 "bdev_nvme_apply_firmware", 00:04:37.854 "bdev_nvme_detach_controller", 00:04:37.854 "bdev_nvme_get_controllers", 00:04:37.854 "bdev_nvme_attach_controller", 00:04:37.854 "bdev_nvme_set_hotplug", 00:04:37.854 "bdev_nvme_set_options", 00:04:37.854 "bdev_passthru_delete", 00:04:37.854 "bdev_passthru_create", 00:04:37.854 "bdev_lvol_set_parent_bdev", 00:04:37.854 "bdev_lvol_set_parent", 00:04:37.854 "bdev_lvol_check_shallow_copy", 00:04:37.854 "bdev_lvol_start_shallow_copy", 00:04:37.854 "bdev_lvol_grow_lvstore", 00:04:37.854 "bdev_lvol_get_lvols", 00:04:37.854 "bdev_lvol_get_lvstores", 00:04:37.854 "bdev_lvol_delete", 00:04:37.854 "bdev_lvol_set_read_only", 00:04:37.854 "bdev_lvol_resize", 00:04:37.854 "bdev_lvol_decouple_parent", 00:04:37.854 "bdev_lvol_inflate", 00:04:37.854 "bdev_lvol_rename", 00:04:37.854 "bdev_lvol_clone_bdev", 00:04:37.854 "bdev_lvol_clone", 00:04:37.854 "bdev_lvol_snapshot", 00:04:37.854 "bdev_lvol_create", 00:04:37.854 "bdev_lvol_delete_lvstore", 00:04:37.854 
"bdev_lvol_rename_lvstore", 00:04:37.854 "bdev_lvol_create_lvstore", 00:04:37.854 "bdev_raid_set_options", 00:04:37.854 "bdev_raid_remove_base_bdev", 00:04:37.854 "bdev_raid_add_base_bdev", 00:04:37.854 "bdev_raid_delete", 00:04:37.854 "bdev_raid_create", 00:04:37.854 "bdev_raid_get_bdevs", 00:04:37.854 "bdev_error_inject_error", 00:04:37.854 "bdev_error_delete", 00:04:37.854 "bdev_error_create", 00:04:37.854 "bdev_split_delete", 00:04:37.854 "bdev_split_create", 00:04:37.854 "bdev_delay_delete", 00:04:37.854 "bdev_delay_create", 00:04:37.854 "bdev_delay_update_latency", 00:04:37.854 "bdev_zone_block_delete", 00:04:37.854 "bdev_zone_block_create", 00:04:37.854 "blobfs_create", 00:04:37.854 "blobfs_detect", 00:04:37.854 "blobfs_set_cache_size", 00:04:37.854 "bdev_aio_delete", 00:04:37.854 "bdev_aio_rescan", 00:04:37.854 "bdev_aio_create", 00:04:37.854 "bdev_ftl_set_property", 00:04:37.854 "bdev_ftl_get_properties", 00:04:37.854 "bdev_ftl_get_stats", 00:04:37.854 "bdev_ftl_unmap", 00:04:37.854 "bdev_ftl_unload", 00:04:37.854 "bdev_ftl_delete", 00:04:37.854 "bdev_ftl_load", 00:04:37.854 "bdev_ftl_create", 00:04:37.854 "bdev_virtio_attach_controller", 00:04:37.854 "bdev_virtio_scsi_get_devices", 00:04:37.854 "bdev_virtio_detach_controller", 00:04:37.854 "bdev_virtio_blk_set_hotplug", 00:04:37.854 "bdev_iscsi_delete", 00:04:37.854 "bdev_iscsi_create", 00:04:37.854 "bdev_iscsi_set_options", 00:04:37.854 "accel_error_inject_error", 00:04:37.854 "ioat_scan_accel_module", 00:04:37.854 "dsa_scan_accel_module", 00:04:37.854 "iaa_scan_accel_module", 00:04:37.854 "vfu_virtio_create_scsi_endpoint", 00:04:37.854 "vfu_virtio_scsi_remove_target", 00:04:37.854 "vfu_virtio_scsi_add_target", 00:04:37.854 "vfu_virtio_create_blk_endpoint", 00:04:37.854 "vfu_virtio_delete_endpoint", 00:04:37.854 "keyring_file_remove_key", 00:04:37.854 "keyring_file_add_key", 00:04:37.854 "keyring_linux_set_options", 00:04:37.854 "iscsi_get_histogram", 00:04:37.854 "iscsi_enable_histogram", 00:04:37.854 "iscsi_set_options", 00:04:37.854 "iscsi_get_auth_groups", 00:04:37.854 "iscsi_auth_group_remove_secret", 00:04:37.854 "iscsi_auth_group_add_secret", 00:04:37.854 "iscsi_delete_auth_group", 00:04:37.854 "iscsi_create_auth_group", 00:04:37.854 "iscsi_set_discovery_auth", 00:04:37.854 "iscsi_get_options", 00:04:37.854 "iscsi_target_node_request_logout", 00:04:37.854 "iscsi_target_node_set_redirect", 00:04:37.854 "iscsi_target_node_set_auth", 00:04:37.854 "iscsi_target_node_add_lun", 00:04:37.854 "iscsi_get_stats", 00:04:37.854 "iscsi_get_connections", 00:04:37.854 "iscsi_portal_group_set_auth", 00:04:37.854 "iscsi_start_portal_group", 00:04:37.854 "iscsi_delete_portal_group", 00:04:37.854 "iscsi_create_portal_group", 00:04:37.854 "iscsi_get_portal_groups", 00:04:37.854 "iscsi_delete_target_node", 00:04:37.854 "iscsi_target_node_remove_pg_ig_maps", 00:04:37.854 "iscsi_target_node_add_pg_ig_maps", 00:04:37.854 "iscsi_create_target_node", 00:04:37.854 "iscsi_get_target_nodes", 00:04:37.854 "iscsi_delete_initiator_group", 00:04:37.854 "iscsi_initiator_group_remove_initiators", 00:04:37.854 "iscsi_initiator_group_add_initiators", 00:04:37.854 "iscsi_create_initiator_group", 00:04:37.854 "iscsi_get_initiator_groups", 00:04:37.854 "nvmf_set_crdt", 00:04:37.854 "nvmf_set_config", 00:04:37.854 "nvmf_set_max_subsystems", 00:04:37.854 "nvmf_stop_mdns_prr", 00:04:37.854 "nvmf_publish_mdns_prr", 00:04:37.855 "nvmf_subsystem_get_listeners", 00:04:37.855 "nvmf_subsystem_get_qpairs", 00:04:37.855 "nvmf_subsystem_get_controllers", 00:04:37.855 
"nvmf_get_stats", 00:04:37.855 "nvmf_get_transports", 00:04:37.855 "nvmf_create_transport", 00:04:37.855 "nvmf_get_targets", 00:04:37.855 "nvmf_delete_target", 00:04:37.855 "nvmf_create_target", 00:04:37.855 "nvmf_subsystem_allow_any_host", 00:04:37.855 "nvmf_subsystem_remove_host", 00:04:37.855 "nvmf_subsystem_add_host", 00:04:37.855 "nvmf_ns_remove_host", 00:04:37.855 "nvmf_ns_add_host", 00:04:37.855 "nvmf_subsystem_remove_ns", 00:04:37.855 "nvmf_subsystem_add_ns", 00:04:37.855 "nvmf_subsystem_listener_set_ana_state", 00:04:37.855 "nvmf_discovery_get_referrals", 00:04:37.855 "nvmf_discovery_remove_referral", 00:04:37.855 "nvmf_discovery_add_referral", 00:04:37.855 "nvmf_subsystem_remove_listener", 00:04:37.855 "nvmf_subsystem_add_listener", 00:04:37.855 "nvmf_delete_subsystem", 00:04:37.855 "nvmf_create_subsystem", 00:04:37.855 "nvmf_get_subsystems", 00:04:37.855 "env_dpdk_get_mem_stats", 00:04:37.855 "nbd_get_disks", 00:04:37.855 "nbd_stop_disk", 00:04:37.855 "nbd_start_disk", 00:04:37.855 "ublk_recover_disk", 00:04:37.855 "ublk_get_disks", 00:04:37.855 "ublk_stop_disk", 00:04:37.855 "ublk_start_disk", 00:04:37.855 "ublk_destroy_target", 00:04:37.855 "ublk_create_target", 00:04:37.855 "virtio_blk_create_transport", 00:04:37.855 "virtio_blk_get_transports", 00:04:37.855 "vhost_controller_set_coalescing", 00:04:37.855 "vhost_get_controllers", 00:04:37.855 "vhost_delete_controller", 00:04:37.855 "vhost_create_blk_controller", 00:04:37.855 "vhost_scsi_controller_remove_target", 00:04:37.855 "vhost_scsi_controller_add_target", 00:04:37.855 "vhost_start_scsi_controller", 00:04:37.855 "vhost_create_scsi_controller", 00:04:37.855 "thread_set_cpumask", 00:04:37.855 "framework_get_governor", 00:04:37.855 "framework_get_scheduler", 00:04:37.855 "framework_set_scheduler", 00:04:37.855 "framework_get_reactors", 00:04:37.855 "thread_get_io_channels", 00:04:37.855 "thread_get_pollers", 00:04:37.855 "thread_get_stats", 00:04:37.855 "framework_monitor_context_switch", 00:04:37.855 "spdk_kill_instance", 00:04:37.855 "log_enable_timestamps", 00:04:37.855 "log_get_flags", 00:04:37.855 "log_clear_flag", 00:04:37.855 "log_set_flag", 00:04:37.855 "log_get_level", 00:04:37.855 "log_set_level", 00:04:37.855 "log_get_print_level", 00:04:37.855 "log_set_print_level", 00:04:37.855 "framework_enable_cpumask_locks", 00:04:37.855 "framework_disable_cpumask_locks", 00:04:37.855 "framework_wait_init", 00:04:37.855 "framework_start_init", 00:04:37.855 "scsi_get_devices", 00:04:37.855 "bdev_get_histogram", 00:04:37.855 "bdev_enable_histogram", 00:04:37.855 "bdev_set_qos_limit", 00:04:37.855 "bdev_set_qd_sampling_period", 00:04:37.855 "bdev_get_bdevs", 00:04:37.855 "bdev_reset_iostat", 00:04:37.855 "bdev_get_iostat", 00:04:37.855 "bdev_examine", 00:04:37.855 "bdev_wait_for_examine", 00:04:37.855 "bdev_set_options", 00:04:37.855 "notify_get_notifications", 00:04:37.855 "notify_get_types", 00:04:37.855 "accel_get_stats", 00:04:37.855 "accel_set_options", 00:04:37.855 "accel_set_driver", 00:04:37.855 "accel_crypto_key_destroy", 00:04:37.855 "accel_crypto_keys_get", 00:04:37.855 "accel_crypto_key_create", 00:04:37.855 "accel_assign_opc", 00:04:37.855 "accel_get_module_info", 00:04:37.855 "accel_get_opc_assignments", 00:04:37.855 "vmd_rescan", 00:04:37.855 "vmd_remove_device", 00:04:37.855 "vmd_enable", 00:04:37.855 "sock_get_default_impl", 00:04:37.855 "sock_set_default_impl", 00:04:37.855 "sock_impl_set_options", 00:04:37.855 "sock_impl_get_options", 00:04:37.855 "iobuf_get_stats", 00:04:37.855 "iobuf_set_options", 
00:04:37.855 "keyring_get_keys", 00:04:37.855 "framework_get_pci_devices", 00:04:37.855 "framework_get_config", 00:04:37.855 "framework_get_subsystems", 00:04:37.855 "vfu_tgt_set_base_path", 00:04:37.855 "trace_get_info", 00:04:37.855 "trace_get_tpoint_group_mask", 00:04:37.855 "trace_disable_tpoint_group", 00:04:37.855 "trace_enable_tpoint_group", 00:04:37.855 "trace_clear_tpoint_mask", 00:04:37.855 "trace_set_tpoint_mask", 00:04:37.855 "spdk_get_version", 00:04:37.855 "rpc_get_methods" 00:04:37.855 ] 00:04:37.855 10:11:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.855 10:11:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:37.855 10:11:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1409433 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1409433 ']' 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1409433 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409433 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409433' 00:04:37.855 killing process with pid 1409433 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1409433 00:04:37.855 10:11:27 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1409433 00:04:38.423 00:04:38.423 real 0m1.207s 00:04:38.423 user 0m2.191s 00:04:38.423 sys 0m0.409s 00:04:38.423 10:11:27 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.423 10:11:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.423 ************************************ 00:04:38.423 END TEST spdkcli_tcp 00:04:38.423 ************************************ 00:04:38.423 10:11:27 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.423 10:11:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.423 10:11:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.423 10:11:27 -- common/autotest_common.sh@10 -- # set +x 00:04:38.423 ************************************ 00:04:38.423 START TEST dpdk_mem_utility 00:04:38.424 ************************************ 00:04:38.424 10:11:27 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.424 * Looking for test storage... 
00:04:38.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:38.424 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:38.424 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1409605 00:04:38.424 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1409605 00:04:38.424 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1409605 ']' 00:04:38.424 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.424 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.424 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.424 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.424 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.424 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.424 [2024-07-25 10:11:28.082631] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:04:38.424 [2024-07-25 10:11:28.082734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409605 ] 00:04:38.424 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.424 [2024-07-25 10:11:28.142683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.683 [2024-07-25 10:11:28.259660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:38.942 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:38.942 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.942 { 00:04:38.942 "filename": "/tmp/spdk_mem_dump.txt" 00:04:38.942 } 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.942 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:38.942 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:38.942 1 heaps totaling size 814.000000 MiB 00:04:38.942 size: 814.000000 MiB heap id: 0 00:04:38.942 end heaps---------- 00:04:38.942 8 mempools totaling size 598.116089 MiB 00:04:38.942 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:38.942 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:38.942 size: 84.521057 MiB name: bdev_io_1409605 00:04:38.942 size: 51.011292 MiB name: evtpool_1409605 00:04:38.942 
size: 50.003479 MiB name: msgpool_1409605 00:04:38.942 size: 21.763794 MiB name: PDU_Pool 00:04:38.942 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:38.942 size: 0.026123 MiB name: Session_Pool 00:04:38.942 end mempools------- 00:04:38.942 6 memzones totaling size 4.142822 MiB 00:04:38.942 size: 1.000366 MiB name: RG_ring_0_1409605 00:04:38.942 size: 1.000366 MiB name: RG_ring_1_1409605 00:04:38.942 size: 1.000366 MiB name: RG_ring_4_1409605 00:04:38.942 size: 1.000366 MiB name: RG_ring_5_1409605 00:04:38.942 size: 0.125366 MiB name: RG_ring_2_1409605 00:04:38.942 size: 0.015991 MiB name: RG_ring_3_1409605 00:04:38.942 end memzones------- 00:04:38.942 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:38.942 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:38.942 list of free elements. size: 12.519348 MiB 00:04:38.942 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:38.942 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:38.942 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:38.942 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:38.942 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:38.942 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:38.942 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:38.942 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:38.942 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:38.942 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:38.942 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:38.942 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:38.942 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:38.942 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:38.942 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:38.942 list of standard malloc elements. 
size: 199.218079 MiB 00:04:38.942 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:38.942 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:38.942 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:38.942 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:38.942 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:38.942 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:38.942 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:38.942 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:38.942 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:38.942 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:38.942 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:38.942 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:38.942 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:38.942 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:38.942 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:38.942 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:38.942 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:38.942 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:38.942 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:38.942 list of memzone associated elements. 
size: 602.262573 MiB 00:04:38.942 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:38.942 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:38.942 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:38.942 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:38.942 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:38.942 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1409605_0 00:04:38.942 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:38.942 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1409605_0 00:04:38.942 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:38.942 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1409605_0 00:04:38.942 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:38.942 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:38.942 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:38.942 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:38.942 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:38.942 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1409605 00:04:38.942 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:38.942 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1409605 00:04:38.942 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:38.942 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1409605 00:04:38.942 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:38.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:38.942 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:38.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:38.942 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:38.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:38.942 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:38.942 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:38.942 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:38.942 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1409605 00:04:38.942 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:38.942 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1409605 00:04:38.942 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:38.942 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1409605 00:04:38.942 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:38.942 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1409605 00:04:38.942 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:38.942 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1409605 00:04:38.942 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:38.942 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:38.942 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:38.942 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:38.942 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:38.942 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:38.942 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:38.942 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1409605 00:04:38.942 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:38.942 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:38.942 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:38.942 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:38.942 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:38.942 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1409605 00:04:38.942 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:38.942 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:38.942 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:38.942 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1409605 00:04:38.942 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:38.942 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1409605 00:04:38.942 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:38.942 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:38.942 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:38.942 10:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1409605 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1409605 ']' 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1409605 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409605 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409605' 00:04:38.942 killing process with pid 1409605 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1409605 00:04:38.942 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1409605 00:04:39.510 00:04:39.510 real 0m1.018s 00:04:39.510 user 0m1.066s 00:04:39.510 sys 0m0.382s 00:04:39.510 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.510 10:11:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.510 ************************************ 00:04:39.510 END TEST dpdk_mem_utility 00:04:39.510 ************************************ 00:04:39.510 10:11:29 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:39.510 10:11:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.510 10:11:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.510 10:11:29 -- common/autotest_common.sh@10 -- # set +x 00:04:39.510 ************************************ 00:04:39.510 START TEST event 00:04:39.510 ************************************ 00:04:39.510 10:11:29 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:39.510 * Looking for test storage... 
00:04:39.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:39.510 10:11:29 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:39.510 10:11:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:39.510 10:11:29 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:39.510 10:11:29 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:39.510 10:11:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.510 10:11:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.510 ************************************ 00:04:39.510 START TEST event_perf 00:04:39.510 ************************************ 00:04:39.510 10:11:29 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:39.510 Running I/O for 1 seconds...[2024-07-25 10:11:29.137889] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:04:39.510 [2024-07-25 10:11:29.137969] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409761 ] 00:04:39.510 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.510 [2024-07-25 10:11:29.200458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:39.768 [2024-07-25 10:11:29.323509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.768 [2024-07-25 10:11:29.323553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.768 [2024-07-25 10:11:29.323605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:39.768 [2024-07-25 10:11:29.323608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.704 Running I/O for 1 seconds... 00:04:40.704 lcore 0: 229518 00:04:40.704 lcore 1: 229517 00:04:40.704 lcore 2: 229517 00:04:40.704 lcore 3: 229517 00:04:40.704 done. 00:04:40.704 00:04:40.704 real 0m1.310s 00:04:40.704 user 0m4.221s 00:04:40.704 sys 0m0.079s 00:04:40.704 10:11:30 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.704 10:11:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.704 ************************************ 00:04:40.704 END TEST event_perf 00:04:40.704 ************************************ 00:04:40.704 10:11:30 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:40.704 10:11:30 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:40.704 10:11:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.704 10:11:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.962 ************************************ 00:04:40.962 START TEST event_reactor 00:04:40.962 ************************************ 00:04:40.962 10:11:30 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:40.962 [2024-07-25 10:11:30.506228] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:04:40.962 [2024-07-25 10:11:30.506306] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409898 ] 00:04:40.962 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.962 [2024-07-25 10:11:30.565436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.962 [2024-07-25 10:11:30.685628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.335 test_start 00:04:42.335 oneshot 00:04:42.335 tick 100 00:04:42.335 tick 100 00:04:42.335 tick 250 00:04:42.335 tick 100 00:04:42.335 tick 100 00:04:42.335 tick 100 00:04:42.335 tick 250 00:04:42.335 tick 500 00:04:42.335 tick 100 00:04:42.335 tick 100 00:04:42.335 tick 250 00:04:42.335 tick 100 00:04:42.335 tick 100 00:04:42.335 test_end 00:04:42.335 00:04:42.335 real 0m1.303s 00:04:42.335 user 0m1.221s 00:04:42.335 sys 0m0.075s 00:04:42.335 10:11:31 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.335 10:11:31 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:42.335 ************************************ 00:04:42.335 END TEST event_reactor 00:04:42.335 ************************************ 00:04:42.335 10:11:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.335 10:11:31 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:42.335 10:11:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.335 10:11:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.335 ************************************ 00:04:42.335 START TEST event_reactor_perf 00:04:42.335 ************************************ 00:04:42.335 10:11:31 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.335 [2024-07-25 10:11:31.864121] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:04:42.335 [2024-07-25 10:11:31.864195] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410056 ] 00:04:42.335 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.335 [2024-07-25 10:11:31.923768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.335 [2024-07-25 10:11:32.044638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.710 test_start 00:04:43.710 test_end 00:04:43.710 Performance: 327193 events per second 00:04:43.710 00:04:43.710 real 0m1.303s 00:04:43.710 user 0m1.220s 00:04:43.710 sys 0m0.075s 00:04:43.710 10:11:33 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.710 10:11:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.710 ************************************ 00:04:43.710 END TEST event_reactor_perf 00:04:43.710 ************************************ 00:04:43.710 10:11:33 event -- event/event.sh@49 -- # uname -s 00:04:43.710 10:11:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:43.710 10:11:33 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:43.710 10:11:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.710 10:11:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.710 10:11:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.710 ************************************ 00:04:43.710 START TEST event_scheduler 00:04:43.710 ************************************ 00:04:43.710 10:11:33 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:43.710 * Looking for test storage... 00:04:43.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:43.710 10:11:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:43.710 10:11:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1410249 00:04:43.710 10:11:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:43.710 10:11:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.710 10:11:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1410249 00:04:43.710 10:11:33 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1410249 ']' 00:04:43.710 10:11:33 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.710 10:11:33 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.710 10:11:33 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.710 10:11:33 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.710 10:11:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.710 [2024-07-25 10:11:33.316291] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:04:43.710 [2024-07-25 10:11:33.316388] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410249 ] 00:04:43.710 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.710 [2024-07-25 10:11:33.379354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.969 [2024-07-25 10:11:33.500022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.969 [2024-07-25 10:11:33.500071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.969 [2024-07-25 10:11:33.500123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.969 [2024-07-25 10:11:33.500126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:43.969 10:11:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 [2024-07-25 10:11:33.573027] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:43.969 [2024-07-25 10:11:33.573059] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:43.969 [2024-07-25 10:11:33.573078] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:43.969 [2024-07-25 10:11:33.573092] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:43.969 [2024-07-25 10:11:33.573104] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.969 10:11:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 [2024-07-25 10:11:33.659726] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.969 10:11:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 ************************************ 00:04:43.969 START TEST scheduler_create_thread 00:04:43.969 ************************************ 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 2 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 3 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 4 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 5 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 6 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 7 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.969 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.227 8 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.227 9 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.227 10 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.227 10:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.159 10:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.159 00:04:45.159 real 0m1.171s 00:04:45.159 user 0m0.011s 00:04:45.159 sys 0m0.004s 00:04:45.159 10:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.159 10:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.159 ************************************ 00:04:45.159 END TEST scheduler_create_thread 00:04:45.159 ************************************ 00:04:45.159 10:11:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:45.159 10:11:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1410249 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1410249 ']' 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1410249 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1410249 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410249' 00:04:45.159 killing process with pid 1410249 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1410249 00:04:45.159 10:11:34 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1410249 00:04:45.780 [2024-07-25 10:11:35.341718] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:45.780 00:04:45.780 real 0m2.311s 00:04:45.780 user 0m2.723s 00:04:45.780 sys 0m0.318s 00:04:45.780 10:11:35 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.780 10:11:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.780 ************************************ 00:04:45.780 END TEST event_scheduler 00:04:45.780 ************************************ 00:04:45.780 10:11:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:46.039 10:11:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:46.039 10:11:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.039 10:11:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.039 10:11:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.039 ************************************ 00:04:46.039 START TEST app_repeat 00:04:46.039 ************************************ 00:04:46.039 10:11:35 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1410510 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1410510' 00:04:46.039 Process app_repeat pid: 1410510 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:46.039 spdk_app_start Round 0 00:04:46.039 10:11:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1410510 /var/tmp/spdk-nbd.sock 00:04:46.039 10:11:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1410510 ']' 00:04:46.039 10:11:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.039 10:11:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.039 10:11:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.039 10:11:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.039 10:11:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.039 [2024-07-25 10:11:35.614797] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:04:46.039 [2024-07-25 10:11:35.614874] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410510 ] 00:04:46.039 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.039 [2024-07-25 10:11:35.675563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.039 [2024-07-25 10:11:35.795630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.039 [2024-07-25 10:11:35.795664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.297 10:11:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.297 10:11:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:46.297 10:11:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.556 Malloc0 00:04:46.556 10:11:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.814 Malloc1 00:04:46.814 10:11:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.814 10:11:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.071 /dev/nbd0 00:04:47.071 10:11:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.071 10:11:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.071 10:11:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:47.071 10:11:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:47.071 10:11:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:47.071 10:11:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:47.071 10:11:36 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:47.071 10:11:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:47.071 10:11:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:47.071 10:11:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:47.071 10:11:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.329 1+0 records in 00:04:47.329 1+0 records out 00:04:47.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167519 s, 24.5 MB/s 00:04:47.329 10:11:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.329 10:11:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:47.329 10:11:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.329 10:11:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:47.329 10:11:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:47.329 10:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.329 10:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.329 10:11:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.587 /dev/nbd1 00:04:47.587 10:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.587 10:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.587 1+0 records in 00:04:47.587 1+0 records out 00:04:47.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233071 s, 17.6 MB/s 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:47.587 10:11:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:47.587 10:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.587 10:11:37 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.587 10:11:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.587 10:11:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.587 10:11:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:47.846 { 00:04:47.846 "nbd_device": "/dev/nbd0", 00:04:47.846 "bdev_name": "Malloc0" 00:04:47.846 }, 00:04:47.846 { 00:04:47.846 "nbd_device": "/dev/nbd1", 00:04:47.846 "bdev_name": "Malloc1" 00:04:47.846 } 00:04:47.846 ]' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.846 { 00:04:47.846 "nbd_device": "/dev/nbd0", 00:04:47.846 "bdev_name": "Malloc0" 00:04:47.846 }, 00:04:47.846 { 00:04:47.846 "nbd_device": "/dev/nbd1", 00:04:47.846 "bdev_name": "Malloc1" 00:04:47.846 } 00:04:47.846 ]' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.846 /dev/nbd1' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.846 /dev/nbd1' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.846 256+0 records in 00:04:47.846 256+0 records out 00:04:47.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587927 s, 178 MB/s 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.846 256+0 records in 00:04:47.846 256+0 records out 00:04:47.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256941 s, 40.8 MB/s 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.846 256+0 records in 00:04:47.846 256+0 records out 00:04:47.846 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0273997 s, 38.3 MB/s 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.846 10:11:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.411 10:11:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.669 10:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.669 10:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.669 10:11:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.669 10:11:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.669 10:11:38 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.669 10:11:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.669 10:11:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.669 10:11:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.669 10:11:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.669 10:11:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.670 10:11:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.927 10:11:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.927 10:11:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.186 10:11:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.443 [2024-07-25 10:11:39.111406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.701 [2024-07-25 10:11:39.230836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.701 [2024-07-25 10:11:39.230856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.701 [2024-07-25 10:11:39.281567] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.701 [2024-07-25 10:11:39.281658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.227 10:11:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.227 10:11:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:52.227 spdk_app_start Round 1 00:04:52.227 10:11:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1410510 /var/tmp/spdk-nbd.sock 00:04:52.227 10:11:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1410510 ']' 00:04:52.227 10:11:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.227 10:11:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.227 10:11:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
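Each app_repeat round blocks at this point in waitforlisten until the restarted app answers RPC on /var/tmp/spdk-nbd.sock. A hedged sketch of that polling loop follows: max_retries=100 comes straight from the trace, while the rpc_get_methods probe and the 0.5 s interval are assumptions about the helper's internals, not confirmed by this log.

waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # app died during startup
        # probe any cheap RPC; success means the server is accepting requests
        scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1  # never came up within the retry budget
}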
00:04:52.227 10:11:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.227 10:11:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.485 10:11:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.485 10:11:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:52.485 10:11:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.744 Malloc0 00:04:53.001 10:11:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.259 Malloc1 00:04:53.260 10:11:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.260 10:11:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.518 /dev/nbd0 00:04:53.518 10:11:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.518 10:11:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:53.518 1+0 records in 00:04:53.518 1+0 records out 00:04:53.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182119 s, 22.5 MB/s 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:53.518 10:11:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:53.518 10:11:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.518 10:11:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.518 10:11:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.776 /dev/nbd1 00:04:53.776 10:11:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.776 10:11:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.776 1+0 records in 00:04:53.776 1+0 records out 00:04:53.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235527 s, 17.4 MB/s 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:53.776 10:11:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:53.776 10:11:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.776 10:11:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.776 10:11:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.776 10:11:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.776 10:11:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:54.342 { 00:04:54.342 "nbd_device": "/dev/nbd0", 00:04:54.342 "bdev_name": "Malloc0" 00:04:54.342 }, 00:04:54.342 { 00:04:54.342 "nbd_device": "/dev/nbd1", 00:04:54.342 "bdev_name": "Malloc1" 00:04:54.342 } 00:04:54.342 ]' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.342 { 00:04:54.342 "nbd_device": "/dev/nbd0", 00:04:54.342 "bdev_name": "Malloc0" 00:04:54.342 }, 00:04:54.342 { 00:04:54.342 "nbd_device": "/dev/nbd1", 00:04:54.342 "bdev_name": "Malloc1" 00:04:54.342 } 00:04:54.342 ]' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.342 /dev/nbd1' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.342 /dev/nbd1' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.342 256+0 records in 00:04:54.342 256+0 records out 00:04:54.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590805 s, 177 MB/s 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.342 256+0 records in 00:04:54.342 256+0 records out 00:04:54.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260339 s, 40.3 MB/s 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.342 256+0 records in 00:04:54.342 256+0 records out 00:04:54.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269229 s, 38.9 MB/s 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.342 10:11:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.600 10:11:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.858 10:11:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.424 10:11:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.424 10:11:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.682 10:11:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.940 [2024-07-25 10:11:45.463884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.940 [2024-07-25 10:11:45.581890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.940 [2024-07-25 10:11:45.581922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.940 [2024-07-25 10:11:45.630672] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.940 [2024-07-25 10:11:45.630746] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.219 10:11:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.219 10:11:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:59.219 spdk_app_start Round 2 00:04:59.219 10:11:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1410510 /var/tmp/spdk-nbd.sock 00:04:59.219 10:11:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1410510 ']' 00:04:59.219 10:11:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.219 10:11:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.219 10:11:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
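Both rounds so far ran the same nbd_dd_data_verify sequence: seed a 1 MiB file from /dev/urandom (bs=4096 count=256), dd it onto each exported nbd device with oflag=direct, then cmp every device back against the file. The same flow as a standalone sketch, with the repo-internal tmp path shortened to a placeholder:

tmp_file=/tmp/nbdrandtest             # the test keeps this under spdk/test/event/
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write through the nbd export
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"   # byte-compare the readback; non-zero exit on mismatch
done
rm "$tmp_file"                        # matches the rm at the end of the traced verify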
00:04:59.219 10:11:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.219 10:11:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.219 10:11:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.219 10:11:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:59.219 10:11:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.219 Malloc0 00:04:59.219 10:11:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.478 Malloc1 00:04:59.478 10:11:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.478 10:11:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.479 10:11:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.479 10:11:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.479 10:11:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.479 10:11:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.737 /dev/nbd0 00:04:59.737 10:11:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.737 10:11:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:59.737 1+0 records in 00:04:59.737 1+0 records out 00:04:59.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167092 s, 24.5 MB/s 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:59.737 10:11:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.995 10:11:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:59.995 10:11:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:59.995 10:11:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.995 10:11:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.995 10:11:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.270 /dev/nbd1 00:05:00.270 10:11:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.270 10:11:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.270 1+0 records in 00:05:00.270 1+0 records out 00:05:00.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211237 s, 19.4 MB/s 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:00.270 10:11:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:00.270 10:11:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.270 10:11:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.270 10:11:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.270 10:11:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.270 10:11:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.528 10:11:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:00.528 { 00:05:00.529 "nbd_device": "/dev/nbd0", 00:05:00.529 "bdev_name": "Malloc0" 00:05:00.529 }, 00:05:00.529 { 00:05:00.529 "nbd_device": "/dev/nbd1", 00:05:00.529 "bdev_name": "Malloc1" 00:05:00.529 } 00:05:00.529 ]' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.529 { 00:05:00.529 "nbd_device": "/dev/nbd0", 00:05:00.529 "bdev_name": "Malloc0" 00:05:00.529 }, 00:05:00.529 { 00:05:00.529 "nbd_device": "/dev/nbd1", 00:05:00.529 "bdev_name": "Malloc1" 00:05:00.529 } 00:05:00.529 ]' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.529 /dev/nbd1' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.529 /dev/nbd1' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.529 256+0 records in 00:05:00.529 256+0 records out 00:05:00.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00586945 s, 179 MB/s 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.529 256+0 records in 00:05:00.529 256+0 records out 00:05:00.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025091 s, 41.8 MB/s 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.529 256+0 records in 00:05:00.529 256+0 records out 00:05:00.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275712 s, 38.0 MB/s 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.529 10:11:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.095 10:11:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.352 10:11:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.611 10:11:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.611 10:11:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.869 10:11:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.128 [2024-07-25 10:11:51.786720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.387 [2024-07-25 10:11:51.905987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.387 [2024-07-25 10:11:51.905987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.387 [2024-07-25 10:11:51.957307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.387 [2024-07-25 10:11:51.957387] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.924 10:11:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1410510 /var/tmp/spdk-nbd.sock 00:05:04.924 10:11:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1410510 ']' 00:05:04.924 10:11:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.924 10:11:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.924 10:11:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
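Stripped of the xtrace noise, every round drives the target through the same rpc.py sequence visible above. A hedged outline (the RPC names and arguments are exactly as traced; the $RPC shorthand is mine):

RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create 64 4096        # 64 MB malloc bdev, 4 KiB blocks -> Malloc0
$RPC bdev_malloc_create 64 4096        # second bdev -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0  # export each bdev as a kernel nbd device
$RPC nbd_start_disk Malloc1 /dev/nbd1
$RPC nbd_get_disks                     # JSON listing, used to count active exports
# ... dd/cmp data verification runs here ...
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC spdk_kill_instance SIGTERM        # ask the app to shut down and start the next round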
00:05:04.924 10:11:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.924 10:11:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:05.182 10:11:54 event.app_repeat -- event/event.sh@39 -- # killprocess 1410510 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1410510 ']' 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1410510 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1410510 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410510' 00:05:05.182 killing process with pid 1410510 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1410510 00:05:05.182 10:11:54 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1410510 00:05:05.441 spdk_app_start is called in Round 0. 00:05:05.441 Shutdown signal received, stop current app iteration 00:05:05.441 Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 reinitialization... 00:05:05.441 spdk_app_start is called in Round 1. 00:05:05.441 Shutdown signal received, stop current app iteration 00:05:05.441 Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 reinitialization... 00:05:05.441 spdk_app_start is called in Round 2. 00:05:05.441 Shutdown signal received, stop current app iteration 00:05:05.441 Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 reinitialization... 00:05:05.441 spdk_app_start is called in Round 3. 
00:05:05.441 Shutdown signal received, stop current app iteration 00:05:05.441 10:11:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:05.441 10:11:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:05.441 00:05:05.441 real 0m19.528s 00:05:05.441 user 0m43.333s 00:05:05.441 sys 0m3.531s 00:05:05.441 10:11:55 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.441 10:11:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.441 ************************************ 00:05:05.441 END TEST app_repeat 00:05:05.441 ************************************ 00:05:05.441 10:11:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:05.441 10:11:55 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:05.441 10:11:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.441 10:11:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.441 10:11:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.441 ************************************ 00:05:05.441 START TEST cpu_locks 00:05:05.441 ************************************ 00:05:05.441 10:11:55 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:05.699 * Looking for test storage... 00:05:05.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:05.700 10:11:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:05.700 10:11:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:05.700 10:11:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:05.700 10:11:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:05.700 10:11:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.700 10:11:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.700 10:11:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.700 ************************************ 00:05:05.700 START TEST default_locks 00:05:05.700 ************************************ 00:05:05.700 10:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:05.700 10:11:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1412537 00:05:05.700 10:11:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.700 10:11:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1412537 00:05:05.700 10:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1412537 ']' 00:05:05.700 10:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.700 10:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.700 10:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
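The default_locks test that starts below exercises SPDK's per-core CPU lock files: spdk_tgt is started on core mask 0x1, and the output of lslocks -p <pid> is grepped for spdk_cpu_lock entries to confirm the core lock is actually held. A rough sketch of that check; the lock-file naming is inferred from the grep in the trace, not confirmed here:

build/bin/spdk_tgt -m 0x1 &
pid=$!
# ... wait for RPC readiness as in the waitforlisten sketch above ...
lslocks -p "$pid" | grep -q spdk_cpu_lock \
    && echo "core 0 lock held by pid $pid"
kill "$pid"; wait "$pid"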
00:05:05.700 10:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.700 10:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.700 [2024-07-25 10:11:55.313319] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:05.700 [2024-07-25 10:11:55.313418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412537 ] 00:05:05.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.700 [2024-07-25 10:11:55.377138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.957 [2024-07-25 10:11:55.497400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.957 10:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.957 10:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:05.957 10:11:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1412537 00:05:05.957 10:11:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.957 10:11:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1412537 00:05:06.524 lslocks: write error 00:05:06.524 10:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1412537 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1412537 ']' 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1412537 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412537 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412537' 00:05:06.525 killing process with pid 1412537 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1412537 00:05:06.525 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1412537 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1412537 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1412537 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 1412537 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1412537 ']' 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1412537) - No such process 00:05:06.783 ERROR: process (pid: 1412537) is no longer running 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:06.783 00:05:06.783 real 0m1.201s 00:05:06.783 user 0m1.177s 00:05:06.783 sys 0m0.552s 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.783 10:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.783 ************************************ 00:05:06.783 END TEST default_locks 00:05:06.783 ************************************ 00:05:06.783 10:11:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:06.783 10:11:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.783 10:11:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.783 10:11:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.783 ************************************ 00:05:06.783 START TEST default_locks_via_rpc 00:05:06.783 ************************************ 00:05:06.783 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:06.783 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1412667 00:05:06.783 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.783 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 
1412667 00:05:06.783 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1412667 ']' 00:05:06.783 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.783 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.783 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.783 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.784 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.042 [2024-07-25 10:11:56.570691] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:07.042 [2024-07-25 10:11:56.570783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412667 ] 00:05:07.042 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.042 [2024-07-25 10:11:56.629832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.042 [2024-07-25 10:11:56.746589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1412667 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1412667 00:05:07.300 10:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # 
killprocess 1412667 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1412667 ']' 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1412667 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412667 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412667' 00:05:07.867 killing process with pid 1412667 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1412667 00:05:07.867 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1412667 00:05:08.125 00:05:08.125 real 0m1.193s 00:05:08.125 user 0m1.218s 00:05:08.125 sys 0m0.525s 00:05:08.125 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.125 10:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.125 ************************************ 00:05:08.125 END TEST default_locks_via_rpc 00:05:08.125 ************************************ 00:05:08.125 10:11:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:08.125 10:11:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.125 10:11:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.125 10:11:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.125 ************************************ 00:05:08.125 START TEST non_locking_app_on_locked_coremask 00:05:08.125 ************************************ 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1412806 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1412806 /var/tmp/spdk.sock 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1412806 ']' 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:08.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.125 10:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.125 [2024-07-25 10:11:57.814082] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:08.125 [2024-07-25 10:11:57.814179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412806 ] 00:05:08.125 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.126 [2024-07-25 10:11:57.876172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.384 [2024-07-25 10:11:57.996656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1412811 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1412811 /var/tmp/spdk2.sock 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1412811 ']' 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.643 10:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.643 [2024-07-25 10:11:58.283004] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:08.643 [2024-07-25 10:11:58.283096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412811 ] 00:05:08.643 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.643 [2024-07-25 10:11:58.374709] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
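
This is the scenario the non_locking_app_on_locked_coremask name describes: a second target is started on the already-claimed core 0, but with locking opted out, so it boots instead of exiting. Both launch lines below are taken from this run's trace (binary path shortened); the lock file involved matches the /var/tmp/spdk_cpu_lock_* pattern checked elsewhere in this suite:

    # First target: claims the core-0 lock file and holds it.
    spdk_tgt -m 0x1 &
    # Second target: same core mask, but --disable-cpumask-locks skips the
    # claim, hence the "CPU core locks deactivated" notice above.
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
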
00:05:08.643 [2024-07-25 10:11:58.374761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.902 [2024-07-25 10:11:58.615113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.836 10:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.836 10:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:09.836 10:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1412806 00:05:09.836 10:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1412806 00:05:09.836 10:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.402 lslocks: write error 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1412806 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1412806 ']' 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1412806 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412806 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412806' 00:05:10.402 killing process with pid 1412806 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1412806 00:05:10.402 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1412806 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1412811 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1412811 ']' 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1412811 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412811 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412811' 00:05:10.972 
killing process with pid 1412811 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1412811 00:05:10.972 10:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1412811 00:05:11.561 00:05:11.561 real 0m3.297s 00:05:11.561 user 0m3.622s 00:05:11.561 sys 0m1.099s 00:05:11.561 10:12:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.561 10:12:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.561 ************************************ 00:05:11.561 END TEST non_locking_app_on_locked_coremask 00:05:11.561 ************************************ 00:05:11.561 10:12:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:11.561 10:12:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.561 10:12:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.561 10:12:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.561 ************************************ 00:05:11.561 START TEST locking_app_on_unlocked_coremask 00:05:11.561 ************************************ 00:05:11.561 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:11.561 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1413199 00:05:11.562 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:11.562 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1413199 /var/tmp/spdk.sock 00:05:11.562 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1413199 ']' 00:05:11.562 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.562 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.562 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.562 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.562 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.562 [2024-07-25 10:12:01.162748] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:11.562 [2024-07-25 10:12:01.162851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413199 ] 00:05:11.562 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.562 [2024-07-25 10:12:01.222157] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
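
killprocess shows up between every pair of phases in this trace, and its body can be read almost line for line off the @950-@974 xtrace tags. A reconstruction under that assumption (the sudo branch never fires in this run, so its body is elided rather than guessed):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                           # @950 guard
        kill -0 "$pid"                                      # @954: process exists?
        if [ "$(uname)" = Linux ]; then                     # @955
            process_name=$(ps --no-headers -o comm= "$pid") # @956 -> reactor_0
        fi
        # @960: '[ reactor_0 = sudo ]' is false here; sudo handling elided
        echo "killing process with pid $pid"                # @968
        kill "$pid"                                         # @969
        wait "$pid"                                         # @974
    }
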
00:05:11.562 [2024-07-25 10:12:01.222198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.823 [2024-07-25 10:12:01.341478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1413252 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1413252 /var/tmp/spdk2.sock 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1413252 ']' 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.823 10:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.081 [2024-07-25 10:12:01.625119] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
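
Here the roles are inverted: the first target runs unlocked, and the second (plain -m 0x1 -r /var/tmp/spdk2.sock, launching above) performs the claim. The claim itself lives in SPDK's app.c (the claim_cpu_cores errors later in this log come from there); the snippet below only illustrates the observable behaviour, assuming an exclusive lock on the per-core file, and is not SPDK source:

    # Illustration, not SPDK code: emulate a core-0 claim and its failure mode.
    exec 9>/var/tmp/spdk_cpu_lock_000
    if ! flock -xn 9; then
        # mirrors app.c:771 "Cannot create lock on core 0, probably process X has claimed it"
        echo "Cannot create lock on core 0" >&2
    fi
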
00:05:12.081 [2024-07-25 10:12:01.625210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413252 ] 00:05:12.081 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.081 [2024-07-25 10:12:01.716071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.339 [2024-07-25 10:12:01.955715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.904 10:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.904 10:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:12.904 10:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1413252 00:05:12.904 10:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1413252 00:05:12.904 10:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.835 lslocks: write error 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1413199 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1413199 ']' 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1413199 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1413199 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1413199' 00:05:13.835 killing process with pid 1413199 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1413199 00:05:13.835 10:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1413199 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1413252 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1413252 ']' 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1413252 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1413252 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1413252' 00:05:14.401 killing process with pid 1413252 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1413252 00:05:14.401 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1413252 00:05:14.659 00:05:14.659 real 0m3.322s 00:05:14.659 user 0m3.676s 00:05:14.659 sys 0m1.076s 00:05:14.659 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.659 10:12:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.659 ************************************ 00:05:14.659 END TEST locking_app_on_unlocked_coremask 00:05:14.659 ************************************ 00:05:14.987 10:12:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:14.987 10:12:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.987 10:12:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.987 10:12:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.987 ************************************ 00:05:14.987 START TEST locking_app_on_locked_coremask 00:05:14.987 ************************************ 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1413599 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1413599 /var/tmp/spdk.sock 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1413599 ']' 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.987 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.987 [2024-07-25 10:12:04.544265] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:05:14.987 [2024-07-25 10:12:04.544362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413599 ] 00:05:14.987 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.987 [2024-07-25 10:12:04.606680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.987 [2024-07-25 10:12:04.723808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1413692 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1413692 /var/tmp/spdk2.sock 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1413692 /var/tmp/spdk2.sock 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1413692 /var/tmp/spdk2.sock 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1413692 ']' 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.246 10:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.246 [2024-07-25 10:12:05.016512] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:05:15.246 [2024-07-25 10:12:05.016611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413692 ] 00:05:15.504 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.504 [2024-07-25 10:12:05.107067] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1413599 has claimed it. 00:05:15.504 [2024-07-25 10:12:05.107137] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:16.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1413692) - No such process 00:05:16.070 ERROR: process (pid: 1413692) is no longer running 00:05:16.070 10:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.070 10:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:16.070 10:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:16.070 10:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.070 10:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:16.070 10:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.070 10:12:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1413599 00:05:16.070 10:12:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1413599 00:05:16.070 10:12:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.636 lslocks: write error 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1413599 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1413599 ']' 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1413599 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1413599 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1413599' 00:05:16.636 killing process with pid 1413599 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1413599 00:05:16.636 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1413599 00:05:16.895 00:05:16.895 real 0m2.024s 00:05:16.895 user 0m2.294s 00:05:16.895 sys 0m0.624s 00:05:16.895 10:12:06 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.895 10:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.895 ************************************ 00:05:16.895 END TEST locking_app_on_locked_coremask 00:05:16.895 ************************************ 00:05:16.895 10:12:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:16.895 10:12:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.895 10:12:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.895 10:12:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.895 ************************************ 00:05:16.895 START TEST locking_overlapped_coremask 00:05:16.895 ************************************ 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1413826 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1413826 /var/tmp/spdk.sock 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1413826 ']' 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.895 10:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.895 [2024-07-25 10:12:06.620200] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:05:16.895 [2024-07-25 10:12:06.620312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413826 ] 00:05:16.895 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.153 [2024-07-25 10:12:06.680321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.153 [2024-07-25 10:12:06.799941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.153 [2024-07-25 10:12:06.800034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.153 [2024-07-25 10:12:06.800038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1413845 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1413845 /var/tmp/spdk2.sock 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1413845 /var/tmp/spdk2.sock 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1413845 /var/tmp/spdk2.sock 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1413845 ']' 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.412 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.412 [2024-07-25 10:12:07.091259] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
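
The overlap being tested is pure mask arithmetic: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so exactly one core is contested, which is why the failure that follows names core 2:

    # 0x7  = 0b00111 -> cores 0,1,2   (first target, already running)
    # 0x1c = 0b11100 -> cores 2,3,4   (second target, expected to fail)
    printf '%#x\n' $(( 0x7 & 0x1c ))  # prints 0x4: bit 2, i.e. core 2
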
00:05:17.412 [2024-07-25 10:12:07.091350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413845 ] 00:05:17.412 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.412 [2024-07-25 10:12:07.180429] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1413826 has claimed it. 00:05:17.412 [2024-07-25 10:12:07.180493] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:18.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1413845) - No such process 00:05:18.346 ERROR: process (pid: 1413845) is no longer running 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1413826 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1413826 ']' 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1413826 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1413826 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1413826' 00:05:18.346 killing process with pid 1413826 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 1413826 00:05:18.346 10:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1413826 00:05:18.604 00:05:18.604 real 0m1.619s 00:05:18.604 user 0m4.377s 00:05:18.604 sys 0m0.417s 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.604 ************************************ 00:05:18.604 END TEST locking_overlapped_coremask 00:05:18.604 ************************************ 00:05:18.604 10:12:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:18.604 10:12:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.604 10:12:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.604 10:12:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.604 ************************************ 00:05:18.604 START TEST locking_overlapped_coremask_via_rpc 00:05:18.604 ************************************ 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1414046 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1414046 /var/tmp/spdk.sock 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1414046 ']' 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.604 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.605 [2024-07-25 10:12:08.288308] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:18.605 [2024-07-25 10:12:08.288408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414046 ] 00:05:18.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.605 [2024-07-25 10:12:08.349035] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
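
Before this next phase proceeds, note the check_remaining_locks step that closed the previous one (the @36-@38 xtrace above): after the overlapping app failed to start, exactly the three lock files for cores 0-2 must still remain. A sketch reconstructed from that trace:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                     # @36: what exists
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # @37: cores 0-2
        [[ ${locks[*]} == "${locks_expected[*]}" ]]          # @38: exact match
    }
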
00:05:18.605 [2024-07-25 10:12:08.349097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.862 [2024-07-25 10:12:08.471263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.862 [2024-07-25 10:12:08.471314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.862 [2024-07-25 10:12:08.471318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.120 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.120 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:19.120 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1414290 00:05:19.120 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1414290 /var/tmp/spdk2.sock 00:05:19.120 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:19.120 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1414290 ']' 00:05:19.120 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.120 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.120 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.121 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.121 10:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.121 [2024-07-25 10:12:08.758013] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:19.121 [2024-07-25 10:12:08.758107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414290 ] 00:05:19.121 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.121 [2024-07-25 10:12:08.850465] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:19.121 [2024-07-25 10:12:08.850520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.378 [2024-07-25 10:12:09.090506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.378 [2024-07-25 10:12:09.090534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:19.378 [2024-07-25 10:12:09.090537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:20.311 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.312 [2024-07-25 10:12:09.808593] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1414046 has claimed it. 
00:05:20.312 request: 00:05:20.312 { 00:05:20.312 "method": "framework_enable_cpumask_locks", 00:05:20.312 "req_id": 1 00:05:20.312 } 00:05:20.312 Got JSON-RPC error response 00:05:20.312 response: 00:05:20.312 { 00:05:20.312 "code": -32603, 00:05:20.312 "message": "Failed to claim CPU core: 2" 00:05:20.312 } 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1414046 /var/tmp/spdk.sock 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1414046 ']' 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.312 10:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.569 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.569 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:20.569 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1414290 /var/tmp/spdk2.sock 00:05:20.569 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1414290 ']' 00:05:20.569 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.569 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.569 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:20.569 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.569 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.827 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.827 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:20.827 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:20.827 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:20.827 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:20.827 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:20.827 00:05:20.827 real 0m2.193s 00:05:20.827 user 0m1.254s 00:05:20.827 sys 0m0.197s 00:05:20.827 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.827 10:12:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.827 ************************************ 00:05:20.827 END TEST locking_overlapped_coremask_via_rpc 00:05:20.827 ************************************ 00:05:20.827 10:12:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:20.827 10:12:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1414046 ]] 00:05:20.827 10:12:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1414046 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1414046 ']' 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1414046 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1414046 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1414046' 00:05:20.827 killing process with pid 1414046 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1414046 00:05:20.827 10:12:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1414046 00:05:21.087 10:12:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1414290 ]] 00:05:21.087 10:12:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1414290 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1414290 ']' 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1414290 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1414290 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1414290' 00:05:21.087 killing process with pid 1414290 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1414290 00:05:21.087 10:12:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1414290 00:05:21.654 10:12:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:21.654 10:12:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:21.654 10:12:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1414046 ]] 00:05:21.654 10:12:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1414046 00:05:21.654 10:12:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1414046 ']' 00:05:21.654 10:12:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1414046 00:05:21.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1414046) - No such process 00:05:21.654 10:12:11 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1414046 is not found' 00:05:21.654 Process with pid 1414046 is not found 00:05:21.654 10:12:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1414290 ]] 00:05:21.654 10:12:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1414290 00:05:21.654 10:12:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1414290 ']' 00:05:21.654 10:12:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1414290 00:05:21.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1414290) - No such process 00:05:21.654 10:12:11 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1414290 is not found' 00:05:21.654 Process with pid 1414290 is not found 00:05:21.654 10:12:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:21.654 00:05:21.654 real 0m16.002s 00:05:21.654 user 0m28.631s 00:05:21.654 sys 0m5.355s 00:05:21.654 10:12:11 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.654 10:12:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.654 ************************************ 00:05:21.654 END TEST cpu_locks 00:05:21.654 ************************************ 00:05:21.654 00:05:21.654 real 0m42.160s 00:05:21.654 user 1m21.499s 00:05:21.654 sys 0m9.703s 00:05:21.654 10:12:11 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.654 10:12:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.654 ************************************ 00:05:21.654 END TEST event 00:05:21.654 ************************************ 00:05:21.654 10:12:11 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:21.654 10:12:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.654 10:12:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.654 10:12:11 -- common/autotest_common.sh@10 -- # set +x 00:05:21.654 ************************************ 00:05:21.654 START TEST thread 00:05:21.654 ************************************ 00:05:21.654 10:12:11 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:21.654 * Looking for test storage... 00:05:21.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:21.654 10:12:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:21.654 10:12:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:21.654 10:12:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.654 10:12:11 thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.654 ************************************ 00:05:21.654 START TEST thread_poller_perf 00:05:21.654 ************************************ 00:05:21.654 10:12:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:21.654 [2024-07-25 10:12:11.349880] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:21.654 [2024-07-25 10:12:11.349951] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414890 ] 00:05:21.654 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.654 [2024-07-25 10:12:11.408267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.912 [2024-07-25 10:12:11.525026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.912 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:23.286 ====================================== 00:05:23.286 busy:2710684360 (cyc) 00:05:23.286 total_run_count: 261000 00:05:23.286 tsc_hz: 2700000000 (cyc) 00:05:23.286 ====================================== 00:05:23.286 poller_cost: 10385 (cyc), 3846 (nsec) 00:05:23.286 00:05:23.286 real 0m1.308s 00:05:23.286 user 0m1.235s 00:05:23.286 sys 0m0.066s 00:05:23.286 10:12:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.286 10:12:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.286 ************************************ 00:05:23.286 END TEST thread_poller_perf 00:05:23.286 ************************************ 00:05:23.287 10:12:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:23.287 10:12:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:23.287 10:12:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.287 10:12:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.287 ************************************ 00:05:23.287 START TEST thread_poller_perf 00:05:23.287 ************************************ 00:05:23.287 10:12:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:23.287 [2024-07-25 10:12:12.707694] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:05:23.287 [2024-07-25 10:12:12.707769] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415096 ] 00:05:23.287 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.287 [2024-07-25 10:12:12.767767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.287 [2024-07-25 10:12:12.887621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.287 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:24.659 ====================================== 00:05:24.659 busy:2702653596 (cyc) 00:05:24.659 total_run_count: 3664000 00:05:24.659 tsc_hz: 2700000000 (cyc) 00:05:24.659 ====================================== 00:05:24.659 poller_cost: 737 (cyc), 272 (nsec) 00:05:24.659 00:05:24.659 real 0m1.304s 00:05:24.659 user 0m1.219s 00:05:24.659 sys 0m0.079s 00:05:24.659 10:12:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.659 10:12:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.659 ************************************ 00:05:24.659 END TEST thread_poller_perf 00:05:24.659 ************************************ 00:05:24.659 10:12:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:24.659 00:05:24.659 real 0m2.767s 00:05:24.659 user 0m2.510s 00:05:24.659 sys 0m0.253s 00:05:24.659 10:12:14 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.659 10:12:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.659 ************************************ 00:05:24.659 END TEST thread 00:05:24.659 ************************************ 00:05:24.659 10:12:14 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:24.659 10:12:14 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:24.659 10:12:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.659 10:12:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.659 10:12:14 -- common/autotest_common.sh@10 -- # set +x 00:05:24.659 ************************************ 00:05:24.659 START TEST app_cmdline 00:05:24.659 ************************************ 00:05:24.659 10:12:14 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:24.659 * Looking for test storage... 00:05:24.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:24.659 10:12:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:24.659 10:12:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1415266 00:05:24.659 10:12:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1415266 00:05:24.659 10:12:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:24.659 10:12:14 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1415266 ']' 00:05:24.659 10:12:14 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.659 10:12:14 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.659 10:12:14 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:24.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.659 10:12:14 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.659 10:12:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:24.659 [2024-07-25 10:12:14.194074] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:24.659 [2024-07-25 10:12:14.194178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415266 ] 00:05:24.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.659 [2024-07-25 10:12:14.254735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.659 [2024-07-25 10:12:14.371331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.917 10:12:14 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.917 10:12:14 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:24.917 10:12:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:25.176 { 00:05:25.176 "version": "SPDK v24.09-pre git sha1 a4ac1b549", 00:05:25.176 "fields": { 00:05:25.176 "major": 24, 00:05:25.176 "minor": 9, 00:05:25.176 "patch": 0, 00:05:25.176 "suffix": "-pre", 00:05:25.176 "commit": "a4ac1b549" 00:05:25.176 } 00:05:25.176 } 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:25.176 10:12:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:25.176 10:12:14 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:25.742 request: 00:05:25.742 { 00:05:25.742 "method": "env_dpdk_get_mem_stats", 00:05:25.742 "req_id": 1 00:05:25.742 } 00:05:25.742 Got JSON-RPC error response 00:05:25.742 response: 00:05:25.742 { 00:05:25.742 "code": -32601, 00:05:25.742 "message": "Method not found" 00:05:25.742 } 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:25.742 10:12:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1415266 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1415266 ']' 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1415266 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1415266 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1415266' 00:05:25.742 killing process with pid 1415266 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@969 -- # kill 1415266 00:05:25.742 10:12:15 app_cmdline -- common/autotest_common.sh@974 -- # wait 1415266 00:05:26.000 00:05:26.000 real 0m1.517s 00:05:26.000 user 0m1.996s 00:05:26.000 sys 0m0.449s 00:05:26.000 10:12:15 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.000 10:12:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.000 ************************************ 00:05:26.000 END TEST app_cmdline 00:05:26.000 ************************************ 00:05:26.000 10:12:15 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:26.000 10:12:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.000 10:12:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.000 10:12:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.000 ************************************ 00:05:26.000 START TEST version 00:05:26.000 ************************************ 00:05:26.000 10:12:15 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:26.000 * Looking for test storage... 
00:05:26.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:26.000 10:12:15 version -- app/version.sh@17 -- # get_header_version major 00:05:26.000 10:12:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.000 10:12:15 version -- app/version.sh@14 -- # cut -f2 00:05:26.000 10:12:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.000 10:12:15 version -- app/version.sh@17 -- # major=24 00:05:26.000 10:12:15 version -- app/version.sh@18 -- # get_header_version minor 00:05:26.000 10:12:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.000 10:12:15 version -- app/version.sh@14 -- # cut -f2 00:05:26.000 10:12:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.000 10:12:15 version -- app/version.sh@18 -- # minor=9 00:05:26.000 10:12:15 version -- app/version.sh@19 -- # get_header_version patch 00:05:26.000 10:12:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.000 10:12:15 version -- app/version.sh@14 -- # cut -f2 00:05:26.000 10:12:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.000 10:12:15 version -- app/version.sh@19 -- # patch=0 00:05:26.000 10:12:15 version -- app/version.sh@20 -- # get_header_version suffix 00:05:26.000 10:12:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.000 10:12:15 version -- app/version.sh@14 -- # cut -f2 00:05:26.000 10:12:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.000 10:12:15 version -- app/version.sh@20 -- # suffix=-pre 00:05:26.001 10:12:15 version -- app/version.sh@22 -- # version=24.9 00:05:26.001 10:12:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:26.001 10:12:15 version -- app/version.sh@28 -- # version=24.9rc0 00:05:26.001 10:12:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:26.001 10:12:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:26.001 10:12:15 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:26.001 10:12:15 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:26.001 00:05:26.001 real 0m0.111s 00:05:26.001 user 0m0.061s 00:05:26.001 sys 0m0.070s 00:05:26.001 10:12:15 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.001 10:12:15 version -- common/autotest_common.sh@10 -- # set +x 00:05:26.001 ************************************ 00:05:26.001 END TEST version 00:05:26.001 ************************************ 00:05:26.259 10:12:15 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:26.259 10:12:15 -- spdk/autotest.sh@202 -- # uname -s 00:05:26.259 10:12:15 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:05:26.259 10:12:15 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:26.259 10:12:15 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:26.259 10:12:15 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
00:05:26.259 10:12:15 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:26.259 10:12:15 -- spdk/autotest.sh@264 -- # timing_exit lib 00:05:26.259 10:12:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.259 10:12:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.259 10:12:15 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:26.259 10:12:15 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:05:26.259 10:12:15 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:05:26.259 10:12:15 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:05:26.259 10:12:15 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:05:26.259 10:12:15 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:05:26.259 10:12:15 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:26.259 10:12:15 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:26.259 10:12:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.259 10:12:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.259 ************************************ 00:05:26.259 START TEST nvmf_tcp 00:05:26.259 ************************************ 00:05:26.259 10:12:15 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:26.259 * Looking for test storage... 00:05:26.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:26.259 10:12:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:26.259 10:12:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:26.259 10:12:15 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:26.259 10:12:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:26.259 10:12:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.259 10:12:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.259 ************************************ 00:05:26.259 START TEST nvmf_target_core 00:05:26.259 ************************************ 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:26.259 * Looking for test storage... 00:05:26.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.259 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.260 10:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:26.260 ************************************ 00:05:26.260 START TEST nvmf_abort 00:05:26.260 ************************************ 00:05:26.260 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:26.519 * Looking for test storage... 
00:05:26.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:05:26.519 10:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:05:27.899 Found 0000:08:00.0 (0x8086 - 0x159b) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:05:27.899 Found 0000:08:00.1 (0x8086 - 0x159b) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:27.899 10:12:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:05:27.899 Found net devices under 0000:08:00.0: cvl_0_0 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:27.899 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:05:27.900 Found net devices under 0000:08:00.1: cvl_0_1 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:27.900 
10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:27.900 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:28.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:28.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:05:28.158 00:05:28.158 --- 10.0.0.2 ping statistics --- 00:05:28.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.158 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:28.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:28.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:05:28.158 00:05:28.158 --- 10.0.0.1 ping statistics --- 00:05:28.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.158 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:28.158 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1416788 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1416788 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1416788 ']' 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.159 10:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.159 [2024-07-25 10:12:17.850898] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:28.159 [2024-07-25 10:12:17.851006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:28.159 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.159 [2024-07-25 10:12:17.919333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.417 [2024-07-25 10:12:18.037649] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:28.417 [2024-07-25 10:12:18.037716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:28.417 [2024-07-25 10:12:18.037732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.417 [2024-07-25 10:12:18.037745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.417 [2024-07-25 10:12:18.037756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
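
The nvmf_tcp_init sequence logged above (common.sh@229-268) builds the whole fixture from the two ports of a single NIC: cvl_0_0 is moved into a private network namespace and acts as the target side, cvl_0_1 stays in the root namespace as the initiator, and nvmf_tgt is then launched inside the namespace. Condensed into a standalone sketch, with device and namespace names taken from the log, paths shortened, and error handling omitted:

    # Rebuild the two-port test topology the harness just set up.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # both directions must answer before the test runs
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target app is then started inside the namespace (path shortened):
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Splitting the two ports across namespaces forces the NVMe/TCP traffic onto the actual link between them rather than letting the kernel short-circuit it over local routes.
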
00:05:28.417 [2024-07-25 10:12:18.037847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.417 [2024-07-25 10:12:18.038144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.417 [2024-07-25 10:12:18.038149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.417 [2024-07-25 10:12:18.171523] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.417 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.676 Malloc0 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.676 Delay0 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.676 [2024-07-25 10:12:18.240193] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.676 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:28.676 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.676 [2024-07-25 10:12:18.345454] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:31.207 Initializing NVMe Controllers 00:05:31.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:31.207 controller IO queue size 128 less than required 00:05:31.207 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:31.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:31.207 Initialization complete. Launching workers. 
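
The abort run starting here was configured entirely over JSON-RPC in the target/abort.sh@17-@27 steps above. Routing the namespace through a delay bdev is what gives the test teeth: with roughly one second of artificial latency per operation, submitted reads stay queued long enough for abort requests to find them in flight. The same sequence as a condensed sketch (rpc.py and binary paths shortened; all flag values copied from the log):

    # Target-side setup for the abort test, as driven by target/abort.sh above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0                  # 64 MiB RAM disk, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000               # latencies in microseconds, ~1 s each
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: keep 128 reads in flight against the slow namespace
    # and abort them, from the root namespace across the two ports.
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128
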
00:05:31.207 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28499 00:05:31.207 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28560, failed to submit 62 00:05:31.207 success 28503, unsuccess 57, failed 0 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:31.207 rmmod nvme_tcp 00:05:31.207 rmmod nvme_fabrics 00:05:31.207 rmmod nvme_keyring 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1416788 ']' 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1416788 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1416788 ']' 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1416788 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416788 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1416788' 00:05:31.207 killing process with pid 1416788 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1416788 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1416788 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:31.207 10:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:33.119 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:33.119 00:05:33.119 real 0m6.801s 00:05:33.119 user 0m10.113s 00:05:33.119 sys 0m2.229s 00:05:33.119 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.119 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.119 ************************************ 00:05:33.119 END TEST nvmf_abort 00:05:33.119 ************************************ 00:05:33.119 10:12:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:33.119 10:12:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:33.119 10:12:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.119 10:12:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:33.119 ************************************ 00:05:33.119 START TEST nvmf_ns_hotplug_stress 00:05:33.119 ************************************ 00:05:33.119 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:33.379 * Looking for test storage... 
00:05:33.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:05:33.379 10:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:05:35.287 Found 0000:08:00.0 (0x8086 - 0x159b) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:35.287 10:12:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:05:35.287 Found 0000:08:00.1 (0x8086 - 0x159b) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:05:35.287 Found net devices under 0000:08:00.0: cvl_0_0 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:35.287 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:05:35.288 Found net devices under 0000:08:00.1: cvl_0_1 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:35.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:35.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:05:35.288 00:05:35.288 --- 10.0.0.2 ping statistics --- 00:05:35.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:35.288 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:35.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:35.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:05:35.288 00:05:35.288 --- 10.0.0.1 ping statistics --- 00:05:35.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:35.288 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1418515 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1418515 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1418515 ']' 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
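
nvmfappstart backgrounds nvmf_tgt inside the namespace and then blocks in waitforlisten until the app's RPC socket answers; the "Waiting for process..." line above is that poll loop announcing itself. A minimal stand-in for what the wait amounts to (a hedged sketch: the loop shape, retry count, and sleep interval are assumptions, not copied from autotest_common.sh):

    # Poll the RPC socket until the freshly launched target answers,
    # giving up early if the process dies during startup.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                 # app exited while starting
            rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    waitforlisten "$nvmfpid"
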
00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.288 10:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.288 [2024-07-25 10:12:24.772776] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:05:35.288 [2024-07-25 10:12:24.772866] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:35.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.288 [2024-07-25 10:12:24.841581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.288 [2024-07-25 10:12:24.958280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:35.288 [2024-07-25 10:12:24.958342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:35.288 [2024-07-25 10:12:24.958359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.288 [2024-07-25 10:12:24.958372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.288 [2024-07-25 10:12:24.958384] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:35.288 [2024-07-25 10:12:24.958467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.288 [2024-07-25 10:12:24.958755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.288 [2024-07-25 10:12:24.958789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.546 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.546 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:35.546 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:35.546 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.546 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.546 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:35.546 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:35.546 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:35.805 [2024-07-25 10:12:25.369636] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.805 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:36.063 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:36.324 
[2024-07-25 10:12:25.983646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:36.324 10:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:36.608 10:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:36.886 Malloc0 00:05:36.886 10:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:37.144 Delay0 00:05:37.144 10:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.710 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:37.710 NULL1 00:05:37.967 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:38.227 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1418842 00:05:38.227 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:38.227 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:38.227 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.227 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.160 Read completed with error (sct=0, sc=11) 00:05:39.160 10:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.675 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:39.675 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:39.933 true 00:05:39.933 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:39.933 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.499 10:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.064 10:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:41.064 10:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:41.064 true 00:05:41.064 10:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:41.064 10:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.628 10:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.886 10:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:41.886 10:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:42.143 true 00:05:42.143 10:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:42.143 10:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.401 10:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.659 10:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:42.659 10:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:42.917 true 00:05:42.917 10:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:42.917 10:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.175 10:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.433 10:12:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:43.433 10:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:43.998 true 00:05:43.998 10:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:43.998 10:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.563 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.129 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:45.129 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:45.386 true 00:05:45.386 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:45.386 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.644 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.903 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:45.903 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:46.161 true 00:05:46.161 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:46.161 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.419 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.677 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:46.677 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:47.243 true 00:05:47.243 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:47.243 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:47.809 10:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.067 10:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:48.067 10:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:48.325 true 00:05:48.325 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:48.325 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.891 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.891 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:48.891 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:49.457 true 00:05:49.457 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:49.457 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.714 10:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.972 10:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:49.972 10:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:50.233 true 00:05:50.233 10:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:50.233 10:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.166 10:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.166 10:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:51.166 10:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:51.424 true 00:05:51.682 10:12:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:51.682 10:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.940 10:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.197 10:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:52.197 10:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:52.455 true 00:05:52.455 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:52.455 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.713 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.971 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:52.971 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:53.228 true 00:05:53.228 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:53.228 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.157 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.414 10:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:54.414 10:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:54.671 true 00:05:54.671 10:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:54.672 10:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.928 10:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.185 10:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:55.185 10:12:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:55.442 true 00:05:55.442 10:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:55.442 10:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.006 10:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.263 10:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:56.263 10:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:56.520 true 00:05:56.520 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:56.520 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.084 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.598 10:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:57.598 10:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:57.855 true 00:05:57.855 10:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:57.855 10:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.112 10:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.369 10:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:58.369 10:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:58.626 true 00:05:58.626 10:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:58.626 10:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.883 10:12:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.479 10:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:59.479 10:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:59.479 true 00:05:59.479 10:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:05:59.479 10:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.045 10:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.045 10:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:00.045 10:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:00.303 true 00:06:00.303 10:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:06:00.303 10:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.246 10:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.505 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:01.505 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:01.763 true 00:06:01.763 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:06:01.763 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.329 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.329 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:02.329 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:02.587 true 00:06:02.587 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:06:02.587 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.845 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.104 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:03.104 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:03.362 true 00:06:03.362 10:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:06:03.362 10:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.735 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.735 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:04.735 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:04.993 true 00:06:04.993 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:06:04.993 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.928 10:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.186 10:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:06.186 10:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:06.444 true 00:06:06.444 10:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:06:06.444 10:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.702 10:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.960 10:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:06.960 10:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:07.217 true 00:06:07.217 10:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:06:07.217 10:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.474 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.732 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:07.732 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:07.990 true 00:06:07.990 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:06:07.990 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.925 10:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.925 Initializing NVMe Controllers 00:06:08.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:08.926 Controller IO queue size 128, less than required. 00:06:08.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:08.926 Controller IO queue size 128, less than required. 00:06:08.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:08.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:08.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:08.926 Initialization complete. Launching workers. 
00:06:08.926 ========================================================
00:06:08.926 Latency(us)
00:06:08.926 Device Information : IOPS MiB/s Average min max
00:06:08.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 724.03 0.35 73333.45 2789.10 1109733.44
00:06:08.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7087.13 3.46 18061.52 4792.57 535130.20
00:06:08.926 ========================================================
00:06:08.926 Total : 7811.15 3.81 23184.77 2789.10 1109733.44
00:06:09.184 10:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:09.184 10:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:09.442 true 00:06:09.442 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1418842 00:06:09.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1418842) - No such process 00:06:09.442 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1418842 00:06:09.442 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.700 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.958 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:09.958 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:09.958 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:09.958 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.958 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:10.215 null0 00:06:10.474 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.474 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.474 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:10.732 null1 00:06:10.732 10:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.732 10:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.732 10:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:10.990 null2 00:06:10.990 10:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.990
10:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.990 10:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:11.248 null3 00:06:11.248 10:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.248 10:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.248 10:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:11.506 null4 00:06:11.506 10:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.506 10:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.506 10:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:11.764 null5 00:06:11.764 10:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.764 10:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.764 10:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:12.023 null6 00:06:12.023 10:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.023 10:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.023 10:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:12.589 null7 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
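Everything from the start of this excerpt down to the Latency(us) summary above is the single-threaded phase of ns_hotplug_stress: script lines @44-@50 keep cycling for as long as the background I/O process (PID 1418842 in this run) stays alive, hot-removing namespace 1, hot-adding the Delay0 bdev back as namespace 1, and growing the NULL1 bdev by one unit per pass (null_size 1009 through 1029 in this excerpt). A minimal sketch of that loop, reconstructed from the echoed @44-@50 trace -- the control flow and the io_pid name are inferred, not the script's verbatim source, and rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path seen in the log:

  while kill -0 "$io_pid"; do                                        # @44: loop while the workload runs
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove NSID 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: hot-add it back
      null_size=$((null_size + 1))                                   # @49: 1009, 1010, ...
      rpc.py bdev_null_resize NULL1 "$null_size"                     # @50: resize NULL1 under load
  done
  wait "$io_pid"                                                     # @53: reap it once kill -0 fails

The phase ends exactly as traced above: kill -0 1418842 reports "No such process", the script waits on the PID (@53), then removes namespaces 1 and 2 (@54-@55) before provisioning the eight-way phase.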
00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.589 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
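Two consistency checks on the Latency(us) table above: Total IOPS is the sum of the two namespaces, and the Total average latency is their IOPS-weighted mean. NSID 1 -- the namespace that was being hot-removed and re-added, backed by Delay0 -- completed roughly a tenth of the I/O of NSID 2 at about four times the average latency, which is what the hotplug churn should produce. The arithmetic, with the numbers copied from the table (an illustration only, not part of the test):

  awk 'BEGIN {
      i1 = 724.03;  a1 = 73333.45    # NSID 1: IOPS, average latency (us)
      i2 = 7087.13; a2 = 18061.52    # NSID 2
      printf "Total: %.2f IOPS, %.2f us avg\n", i1 + i2, (i1*a1 + i2*a2) / (i1 + i2)
  }'
  # prints: Total: 7811.16 IOPS, 23184.77 us avg -- matching the Total row to rounding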
00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
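The null0-null7 bdevs these workers target come from the @58-@60 loop traced right after the teardown above: eight null bdevs, each of size 100 (MB) with a 4096-byte block size. A sketch of that provisioning step, reconstructed from the echoed trace (the for-loop form is inferred from the echoed (( i = 0 )) / (( i < nthreads )) / (( ++i )) steps):

  nthreads=8                                       # @58
  pids=()                                          # @58: worker PIDs, filled in by the launch loop
  for ((i = 0; i < nthreads; i++)); do             # @59
      rpc.py bdev_null_create "null$i" 100 4096    # @60: name, size in MB, block size
  done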
00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1422135 1422136 1422138 1422140 1422142 1422144 1422146 1422148 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.590 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.848 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.848 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.848 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.848 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.848 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.848 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.848 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.848 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.106 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.107 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.365 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.365 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.365 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.365 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.365 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.365 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.365 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.365 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.624 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.881 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.881 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.881 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.881 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.881 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.881 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.881 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.881 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.139 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.397 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.397 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.397 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.397 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.397 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.397 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.397 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
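The interleaved @14-@18 entries through here are the eight concurrent workers, each running the same helper against its own namespace ID and bdev for ten rounds. Reconstructed from the echoed trace -- the local declaration at @14 and the ten-iteration bound at @16 appear verbatim in the log, while the function wrapper itself is inferred:

  add_remove() {                                   # invoked as e.g. add_remove 1 null0
      local nsid=$1 bdev=$2                        # @14
      for ((i = 0; i < 10; i++)); do               # @16: ten add/remove rounds
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
      done
  }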
00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.655 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.656 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.656 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.656 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.656 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.656 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.656 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.656 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.914 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
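Tying the eight-way phase together: @62-@64 launch one add_remove worker per namespace/bdev pair in the background and collect each $!, and the wait echoed at @66 earlier (wait 1422135 1422136 1422138 1422140 1422142 1422144 1422146 1422148 -- this run's actual worker PIDs) blocks until all eight have finished. As a sketch under the same inferred-loop assumption:

  for ((i = 0; i < nthreads; i++)); do    # @62
      add_remove $((i + 1)) "null$i" &    # @63: NSIDs 1-8 against null0-null7
      pids+=($!)                          # @64
  done
  wait "${pids[@]}"                       # @66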
00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.172 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.430 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.430 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.430 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.430 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.430 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.430 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.430 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.430 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.430 10:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.430 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.430 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.430 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.430 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.430 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.688 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.946 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.206 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.464 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.722 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.981 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.241 10:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.500 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.759 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.759 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.759 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.759 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.759 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.759 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.759 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.759 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.759 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:18.019 rmmod nvme_tcp 00:06:18.019 rmmod nvme_fabrics 00:06:18.019 rmmod nvme_keyring 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1418515 ']' 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1418515 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1418515 ']' 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1418515 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418515 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418515' 00:06:18.019 killing process with pid 1418515 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1418515 00:06:18.019 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1418515 00:06:18.280 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:18.280 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:18.280 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:18.280 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:18.280 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:18.280 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.280 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.280 10:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:20.820 00:06:20.820 real 0m47.159s 00:06:20.820 user 3m39.476s 00:06:20.820 sys 0m15.131s 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.820 ************************************ 00:06:20.820 END TEST nvmf_ns_hotplug_stress 00:06:20.820 ************************************ 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.820 ************************************ 00:06:20.820 START TEST nvmf_delete_subsystem 00:06:20.820 ************************************ 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:20.820 * Looking for test storage... 
00:06:20.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:20.820 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:20.821 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:20.821 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
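The gather_supported_nvmf_pci_devs trace that follows buckets the host's NICs by PCI vendor/device ID and keeps the buckets relevant to the transport. A minimal sketch of the same classification via sysfs is below; the device IDs are copied from the @301-@318 lines, but the sysfs walk and the Mellanox catch-all are simplifying assumptions (the script itself consults a pci_bus_cache built elsewhere and lists each ConnectX ID explicitly).

intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
for dev in /sys/bus/pci/devices/*; do
    id="$(cat "$dev/vendor"):$(cat "$dev/device")"
    case "$id" in
        "$intel:0x1592" | "$intel:0x159b") e810+=("${dev##*/}") ;;   # Intel E810 (this run: two 0x159b ports)
        "$intel:0x37d2")                   x722+=("${dev##*/}") ;;   # Intel X722
        "$mellanox:"*)                     mlx+=("${dev##*/}") ;;    # Mellanox ConnectX family
    esac
done
printf 'e810: %s\n' "${e810[@]}"   # on this node would print 0000:08:00.0 and 0000:08:00.1

For TCP the e810 list wins out, and the two ice-driven ports surface as the renamed net devices cvl_0_0 and cvl_0_1 seen in the "Found net devices" lines below.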
00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:22.201 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.201 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:22.201 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:22.202 Found net devices under 0000:08:00.0: cvl_0_0 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:22.202 Found net devices under 0000:08:00.1: cvl_0_1 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:22.202 10:13:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:22.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:22.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:06:22.202 00:06:22.202 --- 10.0.0.2 ping statistics --- 00:06:22.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.202 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:22.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:06:22.202 00:06:22.202 --- 10.0.0.1 ping statistics --- 00:06:22.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.202 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:22.202 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:22.460 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:22.460 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:22.460 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.460 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.460 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1424298 00:06:22.460 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:22.461 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1424298 00:06:22.461 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1424298 ']' 00:06:22.461 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.461 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.461 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.461 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.461 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.461 [2024-07-25 10:13:12.046282] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
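For reference, the nvmf_tcp_init sequence traced just above (common.sh@229 through @268) builds a two-endpoint topology out of the two E810 ports before the target app starts: the target-side port cvl_0_0 moves into a fresh network namespace with 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1. The commands below are lifted directly from the trace; only the inline comments are added.

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # @244-@245: start from clean interfaces
ip netns add cvl_0_0_ns_spdk                           # @248
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # @251: target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # @254: initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @255: target address
ip link set cvl_0_1 up                                 # @258
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up   # @260
ip netns exec cvl_0_0_ns_spdk ip link set lo up        # @261
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # @264: open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # @267: initiator -> target (0.257 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # @268: target -> initiator (0.169 ms)

With both pings answering, NVMF_APP is rewrapped in "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt launched next listens on 10.0.0.2 from inside the namespace.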
00:06:22.461 [2024-07-25 10:13:12.046377] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.461 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.461 [2024-07-25 10:13:12.113567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.461 [2024-07-25 10:13:12.232813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.461 [2024-07-25 10:13:12.232879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.461 [2024-07-25 10:13:12.232896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.461 [2024-07-25 10:13:12.232908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.461 [2024-07-25 10:13:12.232920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:22.461 [2024-07-25 10:13:12.236503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.461 [2024-07-25 10:13:12.236548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 [2024-07-25 10:13:12.370092] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 [2024-07-25 10:13:12.386280] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 NULL1 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 Delay0 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1424335 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:22.719 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:22.719 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.719 [2024-07-25 10:13:12.471044] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
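[annotation] The xtrace above provisions the target entirely over RPC before launching a 5-second spdk_nvme_perf run against it: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev (1,000,000 us, i.e. roughly 1 s of artificial latency per I/O, so plenty of commands are still in flight when the subsystem is deleted), and the namespace mapping. The same sequence written as direct scripts/rpc.py calls (a sketch; rpc_cmd in the suite forwards its arguments to rpc.py on /var/tmp/spdk.sock, and the -o transport flag is simply passed through from NVMF_TRANSPORT_OPTS as traced):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192                      # -u 8192: in-capsule data size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                              # 1000 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # latencies in us
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # then drive I/O at it from the host side, exactly as traced:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &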
00:06:24.664 10:13:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:24.664 10:13:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.664 10:13:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:24.923 [several hundred interleaved 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completions from the in-flight perf workload omitted]
00:06:24.923 [2024-07-25 10:13:14.687304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ddc0 is same with the state(5) to be set
00:06:24.923 [2024-07-25 10:13:14.688161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9eac00d490 is same with the state(5) to be set
00:06:26.296 [2024-07-25 10:13:15.648267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215d600 is same with the state(5) to be set
00:06:26.296 [2024-07-25 10:13:15.688025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9eac00d000 is same with the state(5) to be set
00:06:26.296 [2024-07-25 10:13:15.688293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180130 is same with the state(5) to be set
00:06:26.296 [2024-07-25 10:13:15.690082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217dfa0 is same with the state(5) to be set
00:06:26.296 [2024-07-25 10:13:15.690270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9eac00d7c0 is same with the state(5) to be set
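[annotation] The completion storm above is the expected effect of nvmf_delete_subsystem racing an active workload: once the subsystem's queues are torn down, every in-flight command completes with a generic-status abort (sct=0 is the generic command status type; sc=8 in that type corresponds to a command aborted by submission queue deletion), and the host-side TCP qpairs log recv-state errors as they are destroyed. A quick way to tally such completions from a line-per-entry capture (illustrative only; perf.log is a hypothetical capture file, not produced by this job):

    grep -c 'completed with error (sct=0, sc=8)' perf.log                   # total aborted completions
    grep -oE '(Read|Write) completed with error' perf.log | sort | uniq -c  # split by direction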
00:06:26.296 Initializing NVMe Controllers
00:06:26.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:26.296 Controller IO queue size 128, less than required.
00:06:26.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:26.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:26.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:26.296 Initialization complete. Launching workers.
00:06:26.296 ========================================================
00:06:26.296                                                                                Latency(us)
00:06:26.296 Device Information                                                        :     IOPS    MiB/s    Average       min        max
00:06:26.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:   169.59     0.08  897977.87    771.00 1013584.22
00:06:26.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:   162.65     0.08  912133.39    420.74 1014696.01
00:06:26.296 ========================================================
00:06:26.296 Total                                                                     :   332.25     0.16  904907.74    420.74 1014696.01
00:06:26.296
00:06:26.296 [2024-07-25 10:13:15.691187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215d600 (9): Bad file descriptor
00:06:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:26.296 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:26.296 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:26.296 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1424335
00:06:26.296 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1424335
00:06:26.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1424335) - No such process
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1424335
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1424335
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1424335
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:26.554 10:13:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.554 [2024-07-25 10:13:16.216392] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1424730 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1424730 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.554 10:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:26.554 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.554 [2024-07-25 10:13:16.289030] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
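[annotation] This second phase re-creates cnode1, attaches Delay0 again, and starts a 3-second perf run (pid 1424730); the script then polls until the process exits, failing the test if it lingers. The loop behind the repeated '(( delay++ > 20 ))' / 'kill -0' / 'sleep 0.5' entries below is essentially (a paraphrase reconstructed from the xtrace, not the script verbatim):

    delay=0
    # poll every 0.5 s while perf is alive; give up after ~10 s of polls
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1
        sleep 0.5
    done
    wait "$perf_pid"   # collect the exit status once the process is gone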
00:06:27.120 [six 0.5 s poll iterations of '(( delay++ > 20 ))' / 'kill -0 1424730' / 'sleep 0.5' (00:06:27.120 through 00:06:29.639) omitted]
00:06:29.897 Initializing NVMe Controllers
00:06:29.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:29.897 Controller IO queue size 128, less than required.
00:06:29.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:29.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:29.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:29.897 Initialization complete. Launching workers.
00:06:29.897 ========================================================
00:06:29.897                                                                                Latency(us)
00:06:29.897 Device Information                                                        :     IOPS    MiB/s     Average         min         max
00:06:29.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:   128.00     0.06  1004246.56  1000250.40  1013069.13
00:06:29.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:   128.00     0.06  1005522.85  1000197.78  1042184.52
00:06:29.897 ========================================================
00:06:29.897 Total                                                                     :   256.00     0.12  1004884.71  1000197.78  1042184.52
00:06:29.897
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1424730
00:06:30.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1424730) - No such process
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1424730
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:30.156 rmmod nvme_tcp
00:06:30.156 rmmod nvme_fabrics
00:06:30.156 rmmod nvme_keyring
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1424298 ']'
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1424298
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1424298 ']'
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1424298
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1424298
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1424298' 00:06:30.156 killing process with pid 1424298 00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1424298 00:06:30.156 10:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1424298 00:06:30.415 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:30.415 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:30.415 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:30.415 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:30.415 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:30.415 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.415 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.415 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:32.956 00:06:32.956 real 0m12.047s 00:06:32.956 user 0m28.057s 00:06:32.956 sys 0m2.715s 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.956 ************************************ 00:06:32.956 END TEST nvmf_delete_subsystem 00:06:32.956 ************************************ 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:32.956 ************************************ 00:06:32.956 START TEST nvmf_host_management 00:06:32.956 ************************************ 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:32.956 * Looking for test storage... 
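[annotation] Between the two tests, nvmftestfini tears everything down in a fixed order, visible in the trace above: sync and unload the initiator modules, kill the nvmf_tgt reactor, and remove the namespace plumbing. Condensed (a paraphrase of the traced commands; _remove_spdk_ns is the suite helper that deletes cvl_0_0_ns_spdk, whose own xtrace is suppressed in the log):

    sync
    modprobe -v -r nvme-tcp        # pulls out nvme_tcp / nvme_fabrics / nvme_keyring, per the rmmod output above
    modprobe -v -r nvme-fabrics
    kill 1424298 && wait 1424298   # stop the target reactor (nvmfpid)
    _remove_spdk_ns                # suite helper; removes the target netns
    ip -4 addr flush cvl_0_1       # clear the host-side interface address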
00:06:32.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.956 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:06:32.957 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.335 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:06:34.336 
10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:34.336 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:34.336 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:34.336 Found net devices under 0000:08:00.0: cvl_0_0 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:34.336 Found net devices under 0000:08:00.1: cvl_0_1 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:34.336 10:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.336 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.336 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.336 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:34.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:34.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:06:34.336 00:06:34.336 --- 10.0.0.2 ping statistics --- 00:06:34.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.336 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:06:34.336 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:34.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:06:34.336 00:06:34.336 --- 10.0.0.1 ping statistics --- 00:06:34.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.336 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1426543 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1426543 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1426543 ']' 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.337 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.595 [2024-07-25 10:13:24.127640] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:06:34.595 [2024-07-25 10:13:24.127740] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.595 [2024-07-25 10:13:24.196608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.595 [2024-07-25 10:13:24.317781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.595 [2024-07-25 10:13:24.317844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.595 [2024-07-25 10:13:24.317860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.595 [2024-07-25 10:13:24.317873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.595 [2024-07-25 10:13:24.317884] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:34.595 [2024-07-25 10:13:24.318284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.595 [2024-07-25 10:13:24.318337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.595 [2024-07-25 10:13:24.318387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.595 [2024-07-25 10:13:24.318391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.853 [2024-07-25 10:13:24.469792] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.853 Malloc0 00:06:34.853 [2024-07-25 10:13:24.532293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1426592 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1426592 /var/tmp/bdevperf.sock 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1426592 ']' 00:06:34.853 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
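The target-side network plumbing earlier in this run (nvmf_tcp_init, nvmf/common.sh@229-268) is worth unpacking: one of the two back-to-back ice-driven ports (cvl_0_0) is moved into a private network namespace and becomes the target interface at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic crosses a real link. A minimal standalone sketch of that topology, using the interface names, addresses, and port from the log; root privileges and physically looped ports are assumed:

    #!/usr/bin/env bash
    # Sketch of the nvmf_tcp_init topology seen in the log above.
    TARGET_IF=cvl_0_0        # moved into the namespace, serves 10.0.0.2
    INITIATOR_IF=cvl_0_1     # stays in the default namespace, uses 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # Let NVMe/TCP (port 4420) in through the initiator-side interface.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions, mirroring the two pings in the log.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

nvmf_tgt is then launched inside the namespace via ip netns exec (nvmf/common.sh@480 above), which is why every target RPC in this log runs against a process living in cvl_0_0_ns_spdk.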
00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:34.854 { 00:06:34.854 "params": { 00:06:34.854 "name": "Nvme$subsystem", 00:06:34.854 "trtype": "$TEST_TRANSPORT", 00:06:34.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:34.854 "adrfam": "ipv4", 00:06:34.854 "trsvcid": "$NVMF_PORT", 00:06:34.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:34.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:34.854 "hdgst": ${hdgst:-false}, 00:06:34.854 "ddgst": ${ddgst:-false} 00:06:34.854 }, 00:06:34.854 "method": "bdev_nvme_attach_controller" 00:06:34.854 } 00:06:34.854 EOF 00:06:34.854 )") 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:34.854 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:34.854 "params": { 00:06:34.854 "name": "Nvme0", 00:06:34.854 "trtype": "tcp", 00:06:34.854 "traddr": "10.0.0.2", 00:06:34.854 "adrfam": "ipv4", 00:06:34.854 "trsvcid": "4420", 00:06:34.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:34.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:34.854 "hdgst": false, 00:06:34.854 "ddgst": false 00:06:34.854 }, 00:06:34.854 "method": "bdev_nvme_attach_controller" 00:06:34.854 }' 00:06:34.854 [2024-07-25 10:13:24.616605] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:06:34.854 [2024-07-25 10:13:24.616697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426592 ] 00:06:35.112 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.112 [2024-07-25 10:13:24.677445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.112 [2024-07-25 10:13:24.794185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.370 Running I/O for 10 seconds... 
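The config=() / config+=("$(cat <<-EOF ...)") heredoc just above is gen_nvmf_target_json assembling one bdev_nvme_attach_controller call per subsystem and handing it to bdevperf through process substitution (hence --json /dev/fd/63 on the command line). A reduced sketch of the same pattern for the single Nvme0 controller; the log only shows the params fragment, so the outer subsystems/bdev wrapper here is reconstructed from nvmf/common.sh's conventions and may differ in detail:

    #!/usr/bin/env bash
    # Emit a bdevperf JSON config equivalent to the printf output above.
    gen_target_json() {
        cat <<EOF
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    }

    # <(...) appears to bdevperf as /dev/fd/63, matching the log's command line.
    ./build/examples/bdevperf --json <(gen_target_json) \
        -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10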
00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:35.370 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:35.627 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:35.627 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:35.627 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:35.627 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:35.627 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.627 10:13:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:35.887 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:35.887 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=462
00:06:35.887 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 462 -ge 100 ']'
00:06:35.887 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:06:35.887 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:06:35.887 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:06:35.887 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:35.887 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:35.887 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:35.887 [2024-07-25 10:13:25.423361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbf500 is same with the state(5) to be set
[... identical tcp.c:1653 recv-state message repeated for tqpair=0x1bbf500, timestamps 10:13:25.423511 through 10:13:25.424349, omitted ...]
00:06:35.888 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:35.888 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:35.888 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:35.888 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:35.888 [2024-07-25 10:13:25.431039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.888 [2024-07-25 10:13:25.431083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching command/completion pairs for the remaining in-flight I/O, READ cid:20-63 (lba:68096-73600) and WRITE cid:0-17 (lba:73728-75904), all completed ABORTED - SQ DELETION, omitted ...]
00:06:35.889 [2024-07-25 10:13:25.433158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.889 [2024-07-25 10:13:25.433172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.889 [2024-07-25 10:13:25.433250] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12653d0 was disconnected and freed. reset controller.
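The read_io_count checks above come from host_management.sh's waitforio helper (@45-64): it polls bdevperf over its private RPC socket until the Nvme0n1 bdev has completed at least 100 reads, retrying up to ten times with a 0.25 s pause, before the test injects the fault. The same loop, extracted; the rpc.py path is assumed relative to an SPDK checkout:

    #!/usr/bin/env bash
    # Poll bdevperf's RPC socket until Nvme0n1 has completed >= 100 reads,
    # mirroring the read_io_count checks in the log above.
    RPC_SOCK=/var/tmp/bdevperf.sock
    i=10
    while (( i != 0 )); do
        read_io_count=$(./scripts/rpc.py -s "$RPC_SOCK" bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            echo "I/O is flowing (num_read_ops=$read_io_count)"
            exit 0
        fi
        sleep 0.25
        (( i-- ))
    done
    echo "timed out waiting for I/O" >&2
    exit 1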
00:06:35.890 [2024-07-25 10:13:25.433328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:06:35.890 [2024-07-25 10:13:25.433350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.890 [2024-07-25 10:13:25.433371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:06:35.890 [2024-07-25 10:13:25.433386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.890 [2024-07-25 10:13:25.433401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:06:35.890 [2024-07-25 10:13:25.433415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.890 [2024-07-25 10:13:25.433430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:06:35.890 [2024-07-25 10:13:25.433445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:35.890 [2024-07-25 10:13:25.433459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe338d0 is same with the state(5) to be set
00:06:35.890 [2024-07-25 10:13:25.434776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:06:35.890 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:35.890 10:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:35.890 task offset: 67968 on job bdev=Nvme0n1 fails
00:06:35.890
00:06:35.890                                                          Latency(us)
00:06:35.890 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:06:35.890 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:35.890 Job: Nvme0n1 ended in about 0.43 seconds with error
00:06:35.890 Verification LBA range: start 0x0 length 0x400
00:06:35.890     Nvme0n1                 :       0.43    1247.79      77.99     150.39       0.00   44262.14    2864.17   39612.87
00:06:35.890 ===================================================================================================================
00:06:35.890 Total                       :               1247.79      77.99     150.39       0.00   44262.14    2864.17   39612.87
00:06:35.890 [2024-07-25 10:13:25.437096] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:35.890 [2024-07-25 10:13:25.437128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe338d0 (9): Bad file descriptor
00:06:35.890 [2024-07-25 10:13:25.448949] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
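The abort storm and controller reset above are the point of this test: host_management.sh revokes the host's access mid-I/O (nvmf_subsystem_remove_host, @84), which makes the target drop the qpair and complete every in-flight command ABORTED - SQ DELETION; it then restores access (nvmf_subsystem_add_host, @85) and sleeps (@87) while the initiator tears down the dead qpair and reconnects, with "Resetting controller successful" confirming recovery. The same fault injection against a live target, sketched with rpc.py (socket path and checkout-relative script path assumed):

    #!/usr/bin/env bash
    # Revoke and restore a host's access to a subsystem while I/O is running,
    # forcing the initiator through a controller reset as in the log above.
    RPC="./scripts/rpc.py"            # target RPC socket defaults to /var/tmp/spdk.sock
    SUBNQN=nqn.2016-06.io.spdk:cnode0
    HOSTNQN=nqn.2016-06.io.spdk:host0

    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # in-flight I/O completes ABORTED - SQ DELETION
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"      # restore access so the automatic reset can reconnect
    sleep 1                                                # as @87 does, give the reset time to finish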
00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1426592 00:06:36.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1426592) - No such process 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:36.822 { 00:06:36.822 "params": { 00:06:36.822 "name": "Nvme$subsystem", 00:06:36.822 "trtype": "$TEST_TRANSPORT", 00:06:36.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:36.822 "adrfam": "ipv4", 00:06:36.822 "trsvcid": "$NVMF_PORT", 00:06:36.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:36.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:36.822 "hdgst": ${hdgst:-false}, 00:06:36.822 "ddgst": ${ddgst:-false} 00:06:36.822 }, 00:06:36.822 "method": "bdev_nvme_attach_controller" 00:06:36.822 } 00:06:36.822 EOF 00:06:36.822 )") 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:36.822 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:36.822 "params": { 00:06:36.822 "name": "Nvme0", 00:06:36.822 "trtype": "tcp", 00:06:36.822 "traddr": "10.0.0.2", 00:06:36.822 "adrfam": "ipv4", 00:06:36.822 "trsvcid": "4420", 00:06:36.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:36.822 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:36.822 "hdgst": false, 00:06:36.822 "ddgst": false 00:06:36.822 }, 00:06:36.822 "method": "bdev_nvme_attach_controller" 00:06:36.822 }' 00:06:36.822 [2024-07-25 10:13:26.488809] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:06:36.822 [2024-07-25 10:13:26.488904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426804 ] 00:06:36.822 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.822 [2024-07-25 10:13:26.551282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.080 [2024-07-25 10:13:26.671514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.338 Running I/O for 1 seconds... 00:06:38.270 00:06:38.270 Latency(us) 00:06:38.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.270 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:38.270 Verification LBA range: start 0x0 length 0x400 00:06:38.270 Nvme0n1 : 1.02 1476.46 92.28 0.00 0.00 42319.58 3762.25 37476.88 00:06:38.270 =================================================================================================================== 00:06:38.270 Total : 1476.46 92.28 0.00 0.00 42319.58 3762.25 37476.88 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:38.528 rmmod nvme_tcp 00:06:38.528 rmmod nvme_fabrics 00:06:38.528 rmmod nvme_keyring 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1426543 ']' 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1426543 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1426543 ']' 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1426543 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@955 -- # uname 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1426543 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1426543' 00:06:38.528 killing process with pid 1426543 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1426543 00:06:38.528 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1426543 00:06:38.787 [2024-07-25 10:13:28.431775] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:38.787 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:38.787 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:38.787 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:38.787 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:38.787 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:38.787 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.787 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.787 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:41.328 00:06:41.328 real 0m8.346s 00:06:41.328 user 0m19.472s 00:06:41.328 sys 0m2.391s 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.328 ************************************ 00:06:41.328 END TEST nvmf_host_management 00:06:41.328 ************************************ 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:41.328 ************************************ 00:06:41.328 START TEST nvmf_lvol 00:06:41.328 ************************************ 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:41.328 * Looking for test storage... 00:06:41.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.328 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:06:41.329 10:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.707 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:42.708 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:42.708 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:42.708 Found net devices under 0000:08:00.0: cvl_0_0 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:42.708 Found net devices under 0000:08:00.1: cvl_0_1 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.708 10:13:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:42.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:06:42.708 00:06:42.708 --- 10.0.0.2 ping statistics --- 00:06:42.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.708 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:42.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:06:42.708 00:06:42.708 --- 10.0.0.1 ping statistics --- 00:06:42.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.708 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:42.708 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1428418 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1428418 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1428418 ']' 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.709 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.968 [2024-07-25 10:13:32.515629] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:06:42.968 [2024-07-25 10:13:32.515724] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.968 [2024-07-25 10:13:32.582432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.968 [2024-07-25 10:13:32.702936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.968 [2024-07-25 10:13:32.703003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.968 [2024-07-25 10:13:32.703024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.968 [2024-07-25 10:13:32.703037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.968 [2024-07-25 10:13:32.703049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.968 [2024-07-25 10:13:32.703127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.968 [2024-07-25 10:13:32.706501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.968 [2024-07-25 10:13:32.706550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.226 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.226 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:43.226 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:43.226 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.226 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:43.226 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.226 10:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:43.492 [2024-07-25 10:13:33.113703] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.492 10:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:43.756 10:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:43.756 10:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:44.013 10:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:44.013 10:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:44.578 10:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:44.850 10:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=74250409-002d-4191-be54-671763b01850 
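[Editor's note] The trace above assembles the storage stack one RPC at a time: a TCP transport, two 64 MiB malloc bdevs striped into a raid0 bdev, and a logical volume store on top of the stripe. A condensed sketch of the same setup, assuming a running nvmf_tgt on the default RPC socket and rpc.py on PATH ($rpc is shorthand introduced here for readability, not a variable from the test):

    rpc=rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8192 B IO unit
    $rpc bdev_malloc_create 64 512                                  # Malloc0: 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512                                  # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # RAID0, 64 KiB strip size
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore on the stripe; prints its UUID

Everything after this point addresses the lvstore and its volumes by the UUIDs these calls return, which is why the trace captures them into shell variables (lvs=74250409-... above).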
00:06:44.850 10:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 74250409-002d-4191-be54-671763b01850 lvol 20 00:06:45.107 10:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=eff5a895-9996-48f5-9599-942ec617412f 00:06:45.107 10:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:45.365 10:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eff5a895-9996-48f5-9599-942ec617412f 00:06:45.623 10:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:45.881 [2024-07-25 10:13:35.515301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.881 10:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:46.139 10:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1428759 00:06:46.139 10:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:46.139 10:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:46.139 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.074 10:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot eff5a895-9996-48f5-9599-942ec617412f MY_SNAPSHOT 00:06:47.640 10:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=68d5e63b-095d-4524-9395-896941d2f85d 00:06:47.640 10:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize eff5a895-9996-48f5-9599-942ec617412f 30 00:06:47.898 10:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 68d5e63b-095d-4524-9395-896941d2f85d MY_CLONE 00:06:48.156 10:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e8ef7ef7-97bc-45e7-8954-4c272be34129 00:06:48.156 10:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e8ef7ef7-97bc-45e7-8954-4c272be34129 00:06:49.091 10:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1428759 00:06:57.235 Initializing NVMe Controllers 00:06:57.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:57.235 Controller IO queue size 128, less than required. 00:06:57.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
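[Editor's note] Continuing the sketch: the lvol is created at 20 MiB and exported over NVMe/TCP, then snapshotted, grown, cloned, and inflated while spdk_nvme_perf drives a 10-second randwrite load against it, so the resize and inflate paths are exercised under I/O. Names and addresses are the ones from the trace; error handling is omitted:

    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)        # 20 MiB thin volume; prints its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)    # lvol becomes a thin child of the snapshot
    $rpc bdev_lvol_resize "$lvol" 30                       # grow the live lvol to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)         # writable clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                        # copy in shared clusters; clone becomes independent

The inflate is the expensive step, since it materializes every cluster the clone shared with the snapshot, which is consistent with the long max latencies (~80 ms) in the perf summary that follows.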
00:06:57.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:57.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:57.235 Initialization complete. Launching workers. 00:06:57.235 ======================================================== 00:06:57.235 Latency(us) 00:06:57.235 Device Information : IOPS MiB/s Average min max 00:06:57.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9857.29 38.51 12996.49 1463.46 80496.91 00:06:57.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9809.70 38.32 13050.95 2225.33 81674.70 00:06:57.235 ======================================================== 00:06:57.235 Total : 19666.99 76.82 13023.65 1463.46 81674.70 00:06:57.235 00:06:57.235 10:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:57.235 10:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eff5a895-9996-48f5-9599-942ec617412f 00:06:57.235 10:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 74250409-002d-4191-be54-671763b01850 00:06:57.494 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:57.495 rmmod nvme_tcp 00:06:57.495 rmmod nvme_fabrics 00:06:57.495 rmmod nvme_keyring 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1428418 ']' 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1428418 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1428418 ']' 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1428418 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1428418 00:06:57.495 10:13:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1428418' 00:06:57.495 killing process with pid 1428418 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1428418 00:06:57.495 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1428418 00:06:57.755 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:57.755 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:57.755 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:57.755 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:57.755 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:57.755 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.755 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.755 10:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.661 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:59.661 00:06:59.661 real 0m18.864s 00:06:59.661 user 1m6.049s 00:06:59.661 sys 0m5.109s 00:06:59.661 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.661 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.661 ************************************ 00:06:59.661 END TEST nvmf_lvol 00:06:59.661 ************************************ 00:06:59.920 10:13:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:59.920 10:13:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.920 10:13:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.920 10:13:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.920 ************************************ 00:06:59.920 START TEST nvmf_lvs_grow 00:06:59.920 ************************************ 00:06:59.920 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:59.920 * Looking for test storage... 
00:06:59.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.920 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.920 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:59.920 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.920 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.921 10:13:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:59.921 10:13:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:06:59.921 10:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:01.859 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:01.860 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:01.860 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:01.860 
10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:01.860 Found net devices under 0000:08:00.0: cvl_0_0 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:01.860 Found net devices under 0000:08:00.1: cvl_0_1 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.860 10:13:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:01.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:07:01.860 00:07:01.860 --- 10.0.0.2 ping statistics --- 00:07:01.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.860 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:01.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:07:01.860 00:07:01.860 --- 10.0.0.1 ping statistics --- 00:07:01.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.860 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1431300 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1431300 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1431300 ']' 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.860 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.861 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.861 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.861 [2024-07-25 10:13:51.401508] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
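
The two "ice" ports discovered above (0000:08:00.0/1, exposed as cvl_0_0 and cvl_0_1) are evidently looped back to back: both pings succeed with no other plumbing. Condensed from the nvmf_tcp_init trace, the bring-up amounts to the following sketch (interface names are specific to this E810 rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
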
00:07:01.861 [2024-07-25 10:13:51.401597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.861 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.861 [2024-07-25 10:13:51.465526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.861 [2024-07-25 10:13:51.580970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.861 [2024-07-25 10:13:51.581038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.861 [2024-07-25 10:13:51.581053] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.861 [2024-07-25 10:13:51.581066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.861 [2024-07-25 10:13:51.581078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.861 [2024-07-25 10:13:51.581116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.119 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.119 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:02.119 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.119 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.119 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.119 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.119 10:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:02.377 [2024-07-25 10:13:51.990132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.377 ************************************ 00:07:02.377 START TEST lvs_grow_clean 00:07:02.377 ************************************ 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.377 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:02.635 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:02.635 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:02.893 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:02.893 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:02.893 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:03.458 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:03.458 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:03.458 10:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 580eda6d-dd24-4372-bc5f-ce4135e920ed lvol 150 00:07:03.715 10:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f523dd25-4510-4f29-a888-944a51662a58 00:07:03.715 10:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.715 10:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:03.715 [2024-07-25 10:13:53.482041] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:03.715 [2024-07-25 10:13:53.482115] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:03.715 true 00:07:03.972 10:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:03.972 10:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:04.229 10:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:04.229 10:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:04.229 10:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f523dd25-4510-4f29-a888-944a51662a58 00:07:04.486 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:04.743 [2024-07-25 10:13:54.465102] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.743 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1431716 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1431716 /var/tmp/bdevperf.sock 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1431716 ']' 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:05.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.000 10:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:05.258 [2024-07-25 10:13:54.785755] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
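
At this point the target side of lvs_grow_clean is fully assembled. Condensed from the RPCs traced above (rpc.py stands for the full scripts/rpc.py path; the 200 MiB backing file and the bdev created on it are both named aio_bdev; UUIDs shortened):

  truncate -s 200M test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs        # -> lvs 580eda6d-...
  rpc.py bdev_lvol_create -u 580eda6d-... lvol 150         # -> lvol f523dd25-...
  truncate -s 400M test/nvmf/target/aio_bdev               # grow the backing file only
  rpc.py bdev_aio_rescan aio_bdev                          # bdev: 51200 -> 102400 blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f523dd25-...
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that only the AIO file and bdev have grown; the lvstore still reports 49 data clusters, and bdev_lvol_grow_lvstore is only issued further down, while the bdevperf workload is already running.
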
00:07:05.258 [2024-07-25 10:13:54.785848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431716 ] 00:07:05.258 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.258 [2024-07-25 10:13:54.840089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.258 [2024-07-25 10:13:54.957783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.517 10:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.517 10:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:05.517 10:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:05.774 Nvme0n1 00:07:05.774 10:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:06.031 [ 00:07:06.031 { 00:07:06.031 "name": "Nvme0n1", 00:07:06.031 "aliases": [ 00:07:06.031 "f523dd25-4510-4f29-a888-944a51662a58" 00:07:06.031 ], 00:07:06.031 "product_name": "NVMe disk", 00:07:06.031 "block_size": 4096, 00:07:06.031 "num_blocks": 38912, 00:07:06.031 "uuid": "f523dd25-4510-4f29-a888-944a51662a58", 00:07:06.031 "assigned_rate_limits": { 00:07:06.031 "rw_ios_per_sec": 0, 00:07:06.031 "rw_mbytes_per_sec": 0, 00:07:06.032 "r_mbytes_per_sec": 0, 00:07:06.032 "w_mbytes_per_sec": 0 00:07:06.032 }, 00:07:06.032 "claimed": false, 00:07:06.032 "zoned": false, 00:07:06.032 "supported_io_types": { 00:07:06.032 "read": true, 00:07:06.032 "write": true, 00:07:06.032 "unmap": true, 00:07:06.032 "flush": true, 00:07:06.032 "reset": true, 00:07:06.032 "nvme_admin": true, 00:07:06.032 "nvme_io": true, 00:07:06.032 "nvme_io_md": false, 00:07:06.032 "write_zeroes": true, 00:07:06.032 "zcopy": false, 00:07:06.032 "get_zone_info": false, 00:07:06.032 "zone_management": false, 00:07:06.032 "zone_append": false, 00:07:06.032 "compare": true, 00:07:06.032 "compare_and_write": true, 00:07:06.032 "abort": true, 00:07:06.032 "seek_hole": false, 00:07:06.032 "seek_data": false, 00:07:06.032 "copy": true, 00:07:06.032 "nvme_iov_md": false 00:07:06.032 }, 00:07:06.032 "memory_domains": [ 00:07:06.032 { 00:07:06.032 "dma_device_id": "system", 00:07:06.032 "dma_device_type": 1 00:07:06.032 } 00:07:06.032 ], 00:07:06.032 "driver_specific": { 00:07:06.032 "nvme": [ 00:07:06.032 { 00:07:06.032 "trid": { 00:07:06.032 "trtype": "TCP", 00:07:06.032 "adrfam": "IPv4", 00:07:06.032 "traddr": "10.0.0.2", 00:07:06.032 "trsvcid": "4420", 00:07:06.032 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:06.032 }, 00:07:06.032 "ctrlr_data": { 00:07:06.032 "cntlid": 1, 00:07:06.032 "vendor_id": "0x8086", 00:07:06.032 "model_number": "SPDK bdev Controller", 00:07:06.032 "serial_number": "SPDK0", 00:07:06.032 "firmware_revision": "24.09", 00:07:06.032 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:06.032 "oacs": { 00:07:06.032 "security": 0, 00:07:06.032 "format": 0, 00:07:06.032 "firmware": 0, 00:07:06.032 "ns_manage": 0 00:07:06.032 }, 00:07:06.032 
"multi_ctrlr": true, 00:07:06.032 "ana_reporting": false 00:07:06.032 }, 00:07:06.032 "vs": { 00:07:06.032 "nvme_version": "1.3" 00:07:06.032 }, 00:07:06.032 "ns_data": { 00:07:06.032 "id": 1, 00:07:06.032 "can_share": true 00:07:06.032 } 00:07:06.032 } 00:07:06.032 ], 00:07:06.032 "mp_policy": "active_passive" 00:07:06.032 } 00:07:06.032 } 00:07:06.032 ] 00:07:06.032 10:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1431735 00:07:06.032 10:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:06.032 10:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:06.032 Running I/O for 10 seconds... 00:07:07.403 Latency(us) 00:07:07.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.403 Nvme0n1 : 1.00 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:07:07.403 =================================================================================================================== 00:07:07.403 Total : 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:07:07.403 00:07:07.967 10:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:08.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.225 Nvme0n1 : 2.00 13970.50 54.57 0.00 0.00 0.00 0.00 0.00 00:07:08.225 =================================================================================================================== 00:07:08.225 Total : 13970.50 54.57 0.00 0.00 0.00 0.00 0.00 00:07:08.225 00:07:08.225 true 00:07:08.225 10:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:08.225 10:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:08.792 10:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:08.792 10:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:08.792 10:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1431735 00:07:09.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.051 Nvme0n1 : 3.00 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:07:09.051 =================================================================================================================== 00:07:09.051 Total : 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:07:09.051 00:07:10.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.427 Nvme0n1 : 4.00 14097.50 55.07 0.00 0.00 0.00 0.00 0.00 00:07:10.427 =================================================================================================================== 00:07:10.427 Total : 14097.50 55.07 0.00 0.00 0.00 0.00 0.00 00:07:10.427 00:07:11.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:07:11.358 Nvme0n1 : 5.00 14123.40 55.17 0.00 0.00 0.00 0.00 0.00 00:07:11.358 =================================================================================================================== 00:07:11.358 Total : 14123.40 55.17 0.00 0.00 0.00 0.00 0.00 00:07:11.358 00:07:12.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.288 Nvme0n1 : 6.00 14161.33 55.32 0.00 0.00 0.00 0.00 0.00 00:07:12.288 =================================================================================================================== 00:07:12.288 Total : 14161.33 55.32 0.00 0.00 0.00 0.00 0.00 00:07:12.288 00:07:13.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.218 Nvme0n1 : 7.00 14198.14 55.46 0.00 0.00 0.00 0.00 0.00 00:07:13.218 =================================================================================================================== 00:07:13.218 Total : 14198.14 55.46 0.00 0.00 0.00 0.00 0.00 00:07:13.218 00:07:14.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.150 Nvme0n1 : 8.00 14217.25 55.54 0.00 0.00 0.00 0.00 0.00 00:07:14.150 =================================================================================================================== 00:07:14.150 Total : 14217.25 55.54 0.00 0.00 0.00 0.00 0.00 00:07:14.150 00:07:15.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.082 Nvme0n1 : 9.00 14246.22 55.65 0.00 0.00 0.00 0.00 0.00 00:07:15.082 =================================================================================================================== 00:07:15.082 Total : 14246.22 55.65 0.00 0.00 0.00 0.00 0.00 00:07:15.082 00:07:16.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.015 Nvme0n1 : 10.00 14256.70 55.69 0.00 0.00 0.00 0.00 0.00 00:07:16.015 =================================================================================================================== 00:07:16.015 Total : 14256.70 55.69 0.00 0.00 0.00 0.00 0.00 00:07:16.015 00:07:16.015 00:07:16.015 Latency(us) 00:07:16.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.015 Nvme0n1 : 10.00 14264.01 55.72 0.00 0.00 8968.34 5534.15 17476.27 00:07:16.015 =================================================================================================================== 00:07:16.015 Total : 14264.01 55.72 0.00 0.00 8968.34 5534.15 17476.27 00:07:16.015 0 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1431716 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1431716 ']' 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1431716 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1431716 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:16.273 
10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1431716' 00:07:16.273 killing process with pid 1431716 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1431716 00:07:16.273 Received shutdown signal, test time was about 10.000000 seconds 00:07:16.273 00:07:16.273 Latency(us) 00:07:16.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.273 =================================================================================================================== 00:07:16.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:16.273 10:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1431716 00:07:16.273 10:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:16.838 10:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:17.095 10:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:17.095 10:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:17.353 10:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:17.353 10:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:17.353 10:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:17.611 [2024-07-25 10:14:07.132285] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:17.611 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:17.868 request: 00:07:17.868 { 00:07:17.868 "uuid": "580eda6d-dd24-4372-bc5f-ce4135e920ed", 00:07:17.868 "method": "bdev_lvol_get_lvstores", 00:07:17.868 "req_id": 1 00:07:17.868 } 00:07:17.868 Got JSON-RPC error response 00:07:17.868 response: 00:07:17.868 { 00:07:17.868 "code": -19, 00:07:17.868 "message": "No such device" 00:07:17.868 } 00:07:17.868 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:17.868 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.868 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.868 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.868 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.126 aio_bdev 00:07:18.126 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f523dd25-4510-4f29-a888-944a51662a58 00:07:18.126 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=f523dd25-4510-4f29-a888-944a51662a58 00:07:18.126 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:18.126 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:18.126 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:18.126 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:18.126 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:18.384 10:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b f523dd25-4510-4f29-a888-944a51662a58 -t 2000 00:07:18.642 [ 00:07:18.642 { 00:07:18.642 "name": "f523dd25-4510-4f29-a888-944a51662a58", 00:07:18.642 "aliases": [ 00:07:18.642 "lvs/lvol" 00:07:18.642 ], 00:07:18.642 "product_name": "Logical Volume", 00:07:18.642 "block_size": 4096, 00:07:18.642 "num_blocks": 38912, 00:07:18.642 "uuid": "f523dd25-4510-4f29-a888-944a51662a58", 00:07:18.642 "assigned_rate_limits": { 00:07:18.642 "rw_ios_per_sec": 0, 00:07:18.642 "rw_mbytes_per_sec": 0, 00:07:18.642 "r_mbytes_per_sec": 0, 00:07:18.642 "w_mbytes_per_sec": 0 00:07:18.642 }, 00:07:18.642 "claimed": false, 00:07:18.642 "zoned": false, 00:07:18.642 "supported_io_types": { 00:07:18.642 "read": true, 00:07:18.642 "write": true, 00:07:18.642 "unmap": true, 00:07:18.642 "flush": false, 00:07:18.642 "reset": true, 00:07:18.642 "nvme_admin": false, 00:07:18.642 "nvme_io": false, 00:07:18.642 "nvme_io_md": false, 00:07:18.642 "write_zeroes": true, 00:07:18.642 "zcopy": false, 00:07:18.642 "get_zone_info": false, 00:07:18.642 "zone_management": false, 00:07:18.642 "zone_append": false, 00:07:18.642 "compare": false, 00:07:18.642 "compare_and_write": false, 00:07:18.642 "abort": false, 00:07:18.642 "seek_hole": true, 00:07:18.642 "seek_data": true, 00:07:18.642 "copy": false, 00:07:18.642 "nvme_iov_md": false 00:07:18.642 }, 00:07:18.642 "driver_specific": { 00:07:18.642 "lvol": { 00:07:18.642 "lvol_store_uuid": "580eda6d-dd24-4372-bc5f-ce4135e920ed", 00:07:18.642 "base_bdev": "aio_bdev", 00:07:18.642 "thin_provision": false, 00:07:18.642 "num_allocated_clusters": 38, 00:07:18.642 "snapshot": false, 00:07:18.642 "clone": false, 00:07:18.642 "esnap_clone": false 00:07:18.642 } 00:07:18.642 } 00:07:18.642 } 00:07:18.642 ] 00:07:18.642 10:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:18.642 10:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:18.642 10:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:18.900 10:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:18.900 10:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:18.900 10:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:19.158 10:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:19.158 10:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f523dd25-4510-4f29-a888-944a51662a58 00:07:19.415 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 580eda6d-dd24-4372-bc5f-ce4135e920ed 00:07:19.689 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:19.992 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:19.992 00:07:19.992 real 0m17.688s 00:07:19.992 user 0m17.130s 00:07:19.992 sys 0m1.889s 00:07:19.992 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.992 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:19.992 ************************************ 00:07:19.992 END TEST lvs_grow_clean 00:07:19.992 ************************************ 00:07:19.992 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:19.992 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:19.992 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.992 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:20.250 ************************************ 00:07:20.250 START TEST lvs_grow_dirty 00:07:20.250 ************************************ 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:20.250 10:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:20.508 10:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:20.508 10:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:20.767 10:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:20.767 10:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:20.767 10:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:21.025 10:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:21.025 10:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:21.025 10:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb lvol 150 00:07:21.283 10:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d2db731a-0c64-4e64-8b26-c20e8a882b2d 00:07:21.283 10:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.283 10:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:21.541 [2024-07-25 10:14:11.281273] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:21.541 [2024-07-25 10:14:11.281356] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:21.541 true 00:07:21.541 10:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:21.541 10:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:22.106 10:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:22.106 10:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:22.365 10:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d2db731a-0c64-4e64-8b26-c20e8a882b2d 00:07:22.623 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:22.881 [2024-07-25 10:14:12.476904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.881 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
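
The dirty run starting here rebuilds exactly the same geometry as the clean run just finished (49 data clusters, a 150 MiB lvol, grow to 99), so it is worth pinning down the arithmetic behind the numbers both runs assert. With a 4 MiB cluster size, in bash:

  echo $(( 200 / 4 ))        # 50 clusters fit in the 200 MiB file; 49 are reported,
                             # consistent with one cluster going to lvstore metadata
  echo $(( (150 + 3) / 4 ))  # 38: the 150 MiB lvol rounded up to whole clusters
  echo $(( 38 * 1024 ))      # 38912: those clusters in 4 KiB blocks, matching
                             # num_blocks in the bdev dumps
  echo $(( 400 / 4 - 1 ))    # 99 data clusters after the grow
  echo $(( 99 - 38 ))        # 61 free clusters, matching the post-run checks
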
00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1433394 00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1433394 /var/tmp/bdevperf.sock 00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1433394 ']' 00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.140 10:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.140 [2024-07-25 10:14:12.846978] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
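
bdevperf is launched with -z, so it starts idle and waits to be configured over the private RPC socket named by -r; -m 0x2 also keeps its reactor on core 1, away from the target on core 0. Condensed, the harness drives it like this (rpc.py and bdevperf.py stand for the full script paths):

  # attach the exported namespace as local bdev Nvme0n1 over NVMe/TCP
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # run the workload given on the command line (-q 128 -o 4096 -w randwrite -t 10)
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -S 1 flag is what produces the per-second samples in the tables that follow.
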
00:07:23.140 [2024-07-25 10:14:12.847086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433394 ] 00:07:23.140 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.140 [2024-07-25 10:14:12.910973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.398 [2024-07-25 10:14:13.030460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.398 10:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.398 10:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:23.398 10:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:23.964 Nvme0n1 00:07:23.964 10:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:24.221 [ 00:07:24.221 { 00:07:24.221 "name": "Nvme0n1", 00:07:24.221 "aliases": [ 00:07:24.221 "d2db731a-0c64-4e64-8b26-c20e8a882b2d" 00:07:24.221 ], 00:07:24.221 "product_name": "NVMe disk", 00:07:24.221 "block_size": 4096, 00:07:24.221 "num_blocks": 38912, 00:07:24.221 "uuid": "d2db731a-0c64-4e64-8b26-c20e8a882b2d", 00:07:24.221 "assigned_rate_limits": { 00:07:24.221 "rw_ios_per_sec": 0, 00:07:24.222 "rw_mbytes_per_sec": 0, 00:07:24.222 "r_mbytes_per_sec": 0, 00:07:24.222 "w_mbytes_per_sec": 0 00:07:24.222 }, 00:07:24.222 "claimed": false, 00:07:24.222 "zoned": false, 00:07:24.222 "supported_io_types": { 00:07:24.222 "read": true, 00:07:24.222 "write": true, 00:07:24.222 "unmap": true, 00:07:24.222 "flush": true, 00:07:24.222 "reset": true, 00:07:24.222 "nvme_admin": true, 00:07:24.222 "nvme_io": true, 00:07:24.222 "nvme_io_md": false, 00:07:24.222 "write_zeroes": true, 00:07:24.222 "zcopy": false, 00:07:24.222 "get_zone_info": false, 00:07:24.222 "zone_management": false, 00:07:24.222 "zone_append": false, 00:07:24.222 "compare": true, 00:07:24.222 "compare_and_write": true, 00:07:24.222 "abort": true, 00:07:24.222 "seek_hole": false, 00:07:24.222 "seek_data": false, 00:07:24.222 "copy": true, 00:07:24.222 "nvme_iov_md": false 00:07:24.222 }, 00:07:24.222 "memory_domains": [ 00:07:24.222 { 00:07:24.222 "dma_device_id": "system", 00:07:24.222 "dma_device_type": 1 00:07:24.222 } 00:07:24.222 ], 00:07:24.222 "driver_specific": { 00:07:24.222 "nvme": [ 00:07:24.222 { 00:07:24.222 "trid": { 00:07:24.222 "trtype": "TCP", 00:07:24.222 "adrfam": "IPv4", 00:07:24.222 "traddr": "10.0.0.2", 00:07:24.222 "trsvcid": "4420", 00:07:24.222 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:24.222 }, 00:07:24.222 "ctrlr_data": { 00:07:24.222 "cntlid": 1, 00:07:24.222 "vendor_id": "0x8086", 00:07:24.222 "model_number": "SPDK bdev Controller", 00:07:24.222 "serial_number": "SPDK0", 00:07:24.222 "firmware_revision": "24.09", 00:07:24.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.222 "oacs": { 00:07:24.222 "security": 0, 00:07:24.222 "format": 0, 00:07:24.222 "firmware": 0, 00:07:24.222 "ns_manage": 0 00:07:24.222 }, 00:07:24.222 
"multi_ctrlr": true, 00:07:24.222 "ana_reporting": false 00:07:24.222 }, 00:07:24.222 "vs": { 00:07:24.222 "nvme_version": "1.3" 00:07:24.222 }, 00:07:24.222 "ns_data": { 00:07:24.222 "id": 1, 00:07:24.222 "can_share": true 00:07:24.222 } 00:07:24.222 } 00:07:24.222 ], 00:07:24.222 "mp_policy": "active_passive" 00:07:24.222 } 00:07:24.222 } 00:07:24.222 ] 00:07:24.222 10:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1433500 00:07:24.222 10:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:24.222 10:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:24.222 Running I/O for 10 seconds... 00:07:25.156 Latency(us) 00:07:25.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.156 Nvme0n1 : 1.00 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:07:25.156 =================================================================================================================== 00:07:25.156 Total : 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:07:25.156 00:07:26.090 10:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:26.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.348 Nvme0n1 : 2.00 13970.50 54.57 0.00 0.00 0.00 0.00 0.00 00:07:26.348 =================================================================================================================== 00:07:26.348 Total : 13970.50 54.57 0.00 0.00 0.00 0.00 0.00 00:07:26.348 00:07:26.348 true 00:07:26.348 10:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:26.348 10:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:26.914 10:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:26.914 10:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:26.914 10:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1433500 00:07:27.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.173 Nvme0n1 : 3.00 14055.00 54.90 0.00 0.00 0.00 0.00 0.00 00:07:27.173 =================================================================================================================== 00:07:27.173 Total : 14055.00 54.90 0.00 0.00 0.00 0.00 0.00 00:07:27.173 00:07:28.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.547 Nvme0n1 : 4.00 14097.25 55.07 0.00 0.00 0.00 0.00 0.00 00:07:28.547 =================================================================================================================== 00:07:28.547 Total : 14097.25 55.07 0.00 0.00 0.00 0.00 0.00 00:07:28.547 00:07:29.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:07:29.482 Nvme0n1 : 5.00 14148.00 55.27 0.00 0.00 0.00 0.00 0.00 00:07:29.482 =================================================================================================================== 00:07:29.482 Total : 14148.00 55.27 0.00 0.00 0.00 0.00 0.00 00:07:29.482 00:07:30.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.417 Nvme0n1 : 6.00 14184.33 55.41 0.00 0.00 0.00 0.00 0.00 00:07:30.417 =================================================================================================================== 00:07:30.417 Total : 14184.33 55.41 0.00 0.00 0.00 0.00 0.00 00:07:30.417 00:07:31.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.351 Nvme0n1 : 7.00 14199.29 55.47 0.00 0.00 0.00 0.00 0.00 00:07:31.351 =================================================================================================================== 00:07:31.351 Total : 14199.29 55.47 0.00 0.00 0.00 0.00 0.00 00:07:31.351 00:07:32.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.285 Nvme0n1 : 8.00 14218.25 55.54 0.00 0.00 0.00 0.00 0.00 00:07:32.285 =================================================================================================================== 00:07:32.285 Total : 14218.25 55.54 0.00 0.00 0.00 0.00 0.00 00:07:32.285 00:07:33.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.220 Nvme0n1 : 9.00 14247.11 55.65 0.00 0.00 0.00 0.00 0.00 00:07:33.220 =================================================================================================================== 00:07:33.220 Total : 14247.11 55.65 0.00 0.00 0.00 0.00 0.00 00:07:33.220 00:07:34.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.594 Nvme0n1 : 10.00 14265.60 55.73 0.00 0.00 0.00 0.00 0.00 00:07:34.594 =================================================================================================================== 00:07:34.594 Total : 14265.60 55.73 0.00 0.00 0.00 0.00 0.00 00:07:34.594 00:07:34.594 00:07:34.594 Latency(us) 00:07:34.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.594 Nvme0n1 : 10.01 14262.12 55.71 0.00 0.00 8968.34 4369.07 18544.26 00:07:34.594 =================================================================================================================== 00:07:34.594 Total : 14262.12 55.71 0.00 0.00 8968.34 4369.07 18544.26 00:07:34.594 0 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1433394 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1433394 ']' 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1433394 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1433394 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:34.594 
10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1433394' 00:07:34.594 killing process with pid 1433394 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1433394 00:07:34.594 Received shutdown signal, test time was about 10.000000 seconds 00:07:34.594 00:07:34.594 Latency(us) 00:07:34.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.594 =================================================================================================================== 00:07:34.594 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:34.594 10:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1433394 00:07:34.594 10:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.852 10:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.110 10:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:35.110 10:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:35.371 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:35.371 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:35.371 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1431300 00:07:35.371 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1431300 00:07:35.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1431300 Killed "${NVMF_APP[@]}" "$@" 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1434521 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 1434521 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1434521 ']' 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.630 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.630 [2024-07-25 10:14:25.222448] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:07:35.630 [2024-07-25 10:14:25.222556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.630 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.630 [2024-07-25 10:14:25.289163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.630 [2024-07-25 10:14:25.404340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.630 [2024-07-25 10:14:25.404396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.631 [2024-07-25 10:14:25.404413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.631 [2024-07-25 10:14:25.404426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.631 [2024-07-25 10:14:25.404438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
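waitforlisten above simply polls the freshly started target until its RPC socket answers, bailing out if the process dies first. A rough equivalent of that loop (the retry count and sleep interval are illustrative, not the harness's exact values):

    for _ in $(seq 1 100); do
        # any successful RPC means the target is up and serving /var/tmp/spdk.sock
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done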
00:07:35.631 [2024-07-25 10:14:25.404475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.889 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.889 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:35.889 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.889 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:35.889 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.889 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.889 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.147 [2024-07-25 10:14:25.816725] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:36.147 [2024-07-25 10:14:25.816858] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:36.147 [2024-07-25 10:14:25.816922] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:36.147 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:36.147 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d2db731a-0c64-4e64-8b26-c20e8a882b2d 00:07:36.147 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=d2db731a-0c64-4e64-8b26-c20e8a882b2d 00:07:36.147 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:36.147 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:36.147 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:36.147 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:36.147 10:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:36.405 10:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d2db731a-0c64-4e64-8b26-c20e8a882b2d -t 2000 00:07:36.663 [ 00:07:36.663 { 00:07:36.663 "name": "d2db731a-0c64-4e64-8b26-c20e8a882b2d", 00:07:36.663 "aliases": [ 00:07:36.663 "lvs/lvol" 00:07:36.663 ], 00:07:36.663 "product_name": "Logical Volume", 00:07:36.663 "block_size": 4096, 00:07:36.663 "num_blocks": 38912, 00:07:36.663 "uuid": "d2db731a-0c64-4e64-8b26-c20e8a882b2d", 00:07:36.663 "assigned_rate_limits": { 00:07:36.663 "rw_ios_per_sec": 0, 00:07:36.663 "rw_mbytes_per_sec": 0, 00:07:36.663 "r_mbytes_per_sec": 0, 00:07:36.663 "w_mbytes_per_sec": 0 00:07:36.663 }, 00:07:36.663 "claimed": false, 00:07:36.663 "zoned": false, 
00:07:36.663 "supported_io_types": { 00:07:36.663 "read": true, 00:07:36.663 "write": true, 00:07:36.663 "unmap": true, 00:07:36.663 "flush": false, 00:07:36.663 "reset": true, 00:07:36.663 "nvme_admin": false, 00:07:36.663 "nvme_io": false, 00:07:36.663 "nvme_io_md": false, 00:07:36.663 "write_zeroes": true, 00:07:36.663 "zcopy": false, 00:07:36.663 "get_zone_info": false, 00:07:36.663 "zone_management": false, 00:07:36.663 "zone_append": false, 00:07:36.663 "compare": false, 00:07:36.663 "compare_and_write": false, 00:07:36.663 "abort": false, 00:07:36.663 "seek_hole": true, 00:07:36.663 "seek_data": true, 00:07:36.663 "copy": false, 00:07:36.663 "nvme_iov_md": false 00:07:36.663 }, 00:07:36.663 "driver_specific": { 00:07:36.663 "lvol": { 00:07:36.663 "lvol_store_uuid": "1de76539-1ad2-443b-bf6a-b9e19913dbcb", 00:07:36.663 "base_bdev": "aio_bdev", 00:07:36.663 "thin_provision": false, 00:07:36.663 "num_allocated_clusters": 38, 00:07:36.663 "snapshot": false, 00:07:36.663 "clone": false, 00:07:36.663 "esnap_clone": false 00:07:36.663 } 00:07:36.663 } 00:07:36.663 } 00:07:36.663 ] 00:07:36.663 10:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:36.663 10:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:36.664 10:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:36.922 10:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:36.922 10:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:36.922 10:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:37.180 10:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:37.180 10:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:37.438 [2024-07-25 10:14:27.149885] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.438 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:37.696 request: 00:07:37.696 { 00:07:37.696 "uuid": "1de76539-1ad2-443b-bf6a-b9e19913dbcb", 00:07:37.696 "method": "bdev_lvol_get_lvstores", 00:07:37.696 "req_id": 1 00:07:37.696 } 00:07:37.696 Got JSON-RPC error response 00:07:37.696 response: 00:07:37.696 { 00:07:37.696 "code": -19, 00:07:37.696 "message": "No such device" 00:07:37.696 } 00:07:37.696 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:37.696 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.696 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.696 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.696 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.954 aio_bdev 00:07:37.954 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d2db731a-0c64-4e64-8b26-c20e8a882b2d 00:07:37.954 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=d2db731a-0c64-4e64-8b26-c20e8a882b2d 00:07:37.954 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.954 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:37.954 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.954 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.954 10:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:38.212 10:14:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d2db731a-0c64-4e64-8b26-c20e8a882b2d -t 2000 00:07:38.470 [ 00:07:38.470 { 00:07:38.470 "name": "d2db731a-0c64-4e64-8b26-c20e8a882b2d", 00:07:38.470 "aliases": [ 00:07:38.470 "lvs/lvol" 00:07:38.470 ], 00:07:38.470 "product_name": "Logical Volume", 00:07:38.470 "block_size": 4096, 00:07:38.470 "num_blocks": 38912, 00:07:38.470 "uuid": "d2db731a-0c64-4e64-8b26-c20e8a882b2d", 00:07:38.470 "assigned_rate_limits": { 00:07:38.470 "rw_ios_per_sec": 0, 00:07:38.470 "rw_mbytes_per_sec": 0, 00:07:38.470 "r_mbytes_per_sec": 0, 00:07:38.470 "w_mbytes_per_sec": 0 00:07:38.470 }, 00:07:38.470 "claimed": false, 00:07:38.470 "zoned": false, 00:07:38.470 "supported_io_types": { 00:07:38.470 "read": true, 00:07:38.470 "write": true, 00:07:38.470 "unmap": true, 00:07:38.470 "flush": false, 00:07:38.470 "reset": true, 00:07:38.470 "nvme_admin": false, 00:07:38.470 "nvme_io": false, 00:07:38.470 "nvme_io_md": false, 00:07:38.470 "write_zeroes": true, 00:07:38.470 "zcopy": false, 00:07:38.470 "get_zone_info": false, 00:07:38.470 "zone_management": false, 00:07:38.470 "zone_append": false, 00:07:38.470 "compare": false, 00:07:38.470 "compare_and_write": false, 00:07:38.470 "abort": false, 00:07:38.470 "seek_hole": true, 00:07:38.470 "seek_data": true, 00:07:38.470 "copy": false, 00:07:38.470 "nvme_iov_md": false 00:07:38.470 }, 00:07:38.470 "driver_specific": { 00:07:38.470 "lvol": { 00:07:38.470 "lvol_store_uuid": "1de76539-1ad2-443b-bf6a-b9e19913dbcb", 00:07:38.470 "base_bdev": "aio_bdev", 00:07:38.470 "thin_provision": false, 00:07:38.470 "num_allocated_clusters": 38, 00:07:38.470 "snapshot": false, 00:07:38.470 "clone": false, 00:07:38.470 "esnap_clone": false 00:07:38.470 } 00:07:38.470 } 00:07:38.470 } 00:07:38.470 ] 00:07:38.470 10:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:38.470 10:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:38.470 10:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:39.036 10:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:39.036 10:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 00:07:39.036 10:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:39.293 10:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:39.293 10:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d2db731a-0c64-4e64-8b26-c20e8a882b2d 00:07:39.551 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb 
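The two deletes just above and the bdev_aio_delete/rm -f that follow form the full teardown ladder — strictly top-down: lvol first, then its lvstore, then the AIO base bdev, and finally the backing file. Condensed into the equivalent rpc.py calls:

    rpc=./scripts/rpc.py
    $rpc bdev_lvol_delete d2db731a-0c64-4e64-8b26-c20e8a882b2d       # lvol first
    $rpc bdev_lvol_delete_lvstore -u 1de76539-1ad2-443b-bf6a-b9e19913dbcb
    $rpc bdev_aio_delete aio_bdev                                    # then the base bdev
    rm -f test/nvmf/target/aio_bdev                                  # finally the backing file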
00:07:39.808 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:40.066 00:07:40.066 real 0m19.987s 00:07:40.066 user 0m50.509s 00:07:40.066 sys 0m4.350s 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.066 ************************************ 00:07:40.066 END TEST lvs_grow_dirty 00:07:40.066 ************************************ 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:40.066 nvmf_trace.0 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.066 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.066 rmmod nvme_tcp 00:07:40.324 rmmod nvme_fabrics 00:07:40.324 rmmod nvme_keyring 00:07:40.324 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:40.324 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:07:40.324 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:07:40.324 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1434521 ']' 00:07:40.324 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1434521 00:07:40.325 
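process_shm above is what preserves the run for offline debugging: an SPDK app started with tracing enabled leaves a trace file in /dev/shm, and the harness tars it into the job's output directory before unloading the nvme-tcp modules. The capture itself, with the long workspace path shortened here for readability:

    # nvmf_trace.0 = <app name>_trace.<shm id>, as the startup notices hint
    tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    # after extracting, the file can be inspected offline with spdk_trace -f nvmf_trace.0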
10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1434521 ']' 00:07:40.325 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1434521 00:07:40.325 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:40.325 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.325 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1434521 00:07:40.325 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.325 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.325 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1434521' 00:07:40.325 killing process with pid 1434521 00:07:40.325 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1434521 00:07:40.325 10:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1434521 00:07:40.584 10:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:40.584 10:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:40.584 10:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:40.584 10:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.584 10:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.584 10:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.584 10:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.584 10:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.488 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:42.488 00:07:42.488 real 0m42.689s 00:07:42.488 user 1m13.594s 00:07:42.488 sys 0m7.889s 00:07:42.488 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.488 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.488 ************************************ 00:07:42.488 END TEST nvmf_lvs_grow 00:07:42.488 ************************************ 00:07:42.488 10:14:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:42.488 10:14:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:42.488 10:14:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.488 10:14:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.488 ************************************ 00:07:42.488 START TEST nvmf_bdev_io_wait 00:07:42.488 ************************************ 00:07:42.488 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:42.747 * Looking for test storage... 00:07:42.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.747 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.748 
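The host identity exported a few lines up is worth noting: nvme gen-hostnqn emits a UUID-based NQN, and common.sh reuses the same UUID as the host ID for every later nvme connect. Roughly (the parameter expansion below illustrates the idea, not the script's literal code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip through the last ':' to get the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")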
10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:07:42.748 10:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:07:44.656 10:14:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:44.656 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:44.657 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:44.657 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:44.657 Found net devices under 0000:08:00.0: cvl_0_0 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:44.657 Found net devices under 0000:08:00.1: cvl_0_1 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.657 10:14:33 
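The discovery pass above is plain sysfs walking: for each PCI function matching a supported NIC ID (both E810 ports, 8086:159b, on this box), the script lists the kernel netdevs registered under that function and collects the ones that are up. The same lookup by hand:

    for pci in 0000:08:00.0 0000:08:00.1; do
        # every entry under .../net is a netdev bound to that PCI function
        ls "/sys/bus/pci/devices/$pci/net"    # -> cvl_0_0 and cvl_0_1 here
    done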
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.657 10:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:07:44.657 00:07:44.657 --- 10.0.0.2 ping statistics --- 00:07:44.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.657 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:07:44.657 00:07:44.657 --- 10.0.0.1 ping statistics --- 00:07:44.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.657 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1436488 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1436488 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1436488 ']' 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.657 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.657 [2024-07-25 10:14:34.137574] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:07:44.657 [2024-07-25 10:14:34.137673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.657 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.657 [2024-07-25 10:14:34.204116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.657 [2024-07-25 10:14:34.325387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.657 [2024-07-25 10:14:34.325454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.657 [2024-07-25 10:14:34.325470] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.657 [2024-07-25 10:14:34.325492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.657 [2024-07-25 10:14:34.325504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.657 [2024-07-25 10:14:34.325589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.657 [2024-07-25 10:14:34.325670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.657 [2024-07-25 10:14:34.325747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.658 [2024-07-25 10:14:34.325780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.658 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.946 10:14:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.946 [2024-07-25 10:14:34.488085] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.946 Malloc0 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.946 [2024-07-25 10:14:34.550322] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1436602 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1436603 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1436605 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:44.946 { 00:07:44.946 "params": { 00:07:44.946 "name": "Nvme$subsystem", 00:07:44.946 "trtype": "$TEST_TRANSPORT", 00:07:44.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.946 "adrfam": "ipv4", 00:07:44.946 "trsvcid": "$NVMF_PORT", 00:07:44.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.946 "hdgst": ${hdgst:-false}, 00:07:44.946 "ddgst": ${ddgst:-false} 00:07:44.946 }, 00:07:44.946 "method": "bdev_nvme_attach_controller" 00:07:44.946 } 00:07:44.946 EOF 00:07:44.946 )") 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1436608 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:44.946 { 00:07:44.946 "params": { 00:07:44.946 "name": "Nvme$subsystem", 00:07:44.946 "trtype": "$TEST_TRANSPORT", 00:07:44.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.946 "adrfam": "ipv4", 00:07:44.946 "trsvcid": "$NVMF_PORT", 00:07:44.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.946 "hdgst": ${hdgst:-false}, 00:07:44.946 "ddgst": ${ddgst:-false} 00:07:44.946 }, 00:07:44.946 "method": "bdev_nvme_attach_controller" 00:07:44.946 } 00:07:44.946 EOF 00:07:44.946 )") 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:44.946 { 00:07:44.946 "params": { 00:07:44.946 
"name": "Nvme$subsystem", 00:07:44.946 "trtype": "$TEST_TRANSPORT", 00:07:44.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.946 "adrfam": "ipv4", 00:07:44.946 "trsvcid": "$NVMF_PORT", 00:07:44.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.946 "hdgst": ${hdgst:-false}, 00:07:44.946 "ddgst": ${ddgst:-false} 00:07:44.946 }, 00:07:44.946 "method": "bdev_nvme_attach_controller" 00:07:44.946 } 00:07:44.946 EOF 00:07:44.946 )") 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:44.946 { 00:07:44.946 "params": { 00:07:44.946 "name": "Nvme$subsystem", 00:07:44.946 "trtype": "$TEST_TRANSPORT", 00:07:44.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.946 "adrfam": "ipv4", 00:07:44.946 "trsvcid": "$NVMF_PORT", 00:07:44.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.946 "hdgst": ${hdgst:-false}, 00:07:44.946 "ddgst": ${ddgst:-false} 00:07:44.946 }, 00:07:44.946 "method": "bdev_nvme_attach_controller" 00:07:44.946 } 00:07:44.946 EOF 00:07:44.946 )") 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1436602 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:44.946 "params": { 00:07:44.946 "name": "Nvme1", 00:07:44.946 "trtype": "tcp", 00:07:44.946 "traddr": "10.0.0.2", 00:07:44.946 "adrfam": "ipv4", 00:07:44.946 "trsvcid": "4420", 00:07:44.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:44.946 "hdgst": false, 00:07:44.946 "ddgst": false 00:07:44.946 }, 00:07:44.946 "method": "bdev_nvme_attach_controller" 00:07:44.946 }' 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:44.946 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:44.946 "params": { 00:07:44.946 "name": "Nvme1", 00:07:44.946 "trtype": "tcp", 00:07:44.947 "traddr": "10.0.0.2", 00:07:44.947 "adrfam": "ipv4", 00:07:44.947 "trsvcid": "4420", 00:07:44.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:44.947 "hdgst": false, 00:07:44.947 "ddgst": false 00:07:44.947 }, 00:07:44.947 "method": "bdev_nvme_attach_controller" 00:07:44.947 }' 00:07:44.947 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:44.947 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:44.947 "params": { 00:07:44.947 "name": "Nvme1", 00:07:44.947 "trtype": "tcp", 00:07:44.947 "traddr": "10.0.0.2", 00:07:44.947 "adrfam": "ipv4", 00:07:44.947 "trsvcid": "4420", 00:07:44.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:44.947 "hdgst": false, 00:07:44.947 "ddgst": false 00:07:44.947 }, 00:07:44.947 "method": "bdev_nvme_attach_controller" 00:07:44.947 }' 00:07:44.947 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:44.947 10:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:44.947 "params": { 00:07:44.947 "name": "Nvme1", 00:07:44.947 "trtype": "tcp", 00:07:44.947 "traddr": "10.0.0.2", 00:07:44.947 "adrfam": "ipv4", 00:07:44.947 "trsvcid": "4420", 00:07:44.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:44.947 "hdgst": false, 00:07:44.947 "ddgst": false 00:07:44.947 }, 00:07:44.947 "method": "bdev_nvme_attach_controller" 00:07:44.947 }' 00:07:44.947 [2024-07-25 10:14:34.602908] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:07:44.947 [2024-07-25 10:14:34.602908] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:07:44.947 [2024-07-25 10:14:34.603006] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:44.947 [2024-07-25 10:14:34.603010] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:44.947 [2024-07-25 10:14:34.603915] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
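The config+=("$(cat <<-EOF ... EOF)") fragments and the IFS=, / printf / jq . steps above come from gen_nvmf_target_json in test/nvmf/common.sh: each pass renders one bdev_nvme_attach_controller stanza into a bash array, and the resolved stanza is what the printf lines show. A minimal, self-contained sketch of the pattern follows; values are hard-coded to match the resolved output (the real helper expands $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT), and the outer "subsystems" envelope is an assumption, since only the inner stanza is visible verbatim in the log.

    #!/usr/bin/env bash
    # Sketch of the gen_nvmf_target_json accumulation pattern seen above.
    config=()
    for subsystem in 1; do
    config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    )")
    done

    gen_config() {
      # With IFS set to ',', "${config[*]}" joins multiple stanzas with
      # commas; this is the IFS=, / printf pair recorded in the log.
      # jq . validates and pretty-prints the assembled document.
      local IFS=,
      jq . <<EOF
    { "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
    EOF
    }

    # Process substitution is why bdevperf reports the file as --json /dev/fd/63:
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_config) -q 128 -o 4096 -w write -t 1 -s 256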
00:07:44.947 [2024-07-25 10:14:34.604004] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:44.947 [2024-07-25 10:14:34.605179] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:07:44.947 [2024-07-25 10:14:34.605247] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:44.947 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.229 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.229 [2024-07-25 10:14:34.745724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.229 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.229 [2024-07-25 10:14:34.815685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.229 [2024-07-25 10:14:34.841807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:07:45.229 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.229 [2024-07-25 10:14:34.885645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.229 [2024-07-25 10:14:34.912265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:07:45.229 [2024-07-25 10:14:34.951203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.229 [2024-07-25 10:14:34.983081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:45.488 [2024-07-25 10:14:35.046753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:07:45.488 Running I/O for 1 seconds... 00:07:45.747 Running I/O for 1 seconds... 00:07:45.747 Running I/O for 1 seconds... 00:07:45.747 Running I/O for 1 seconds... 
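At this point four bdevperf processes (one per workload, pinned to cores 4-7 by the 0x10/0x20/0x40/0x80 core masks) are driving the same Malloc0 namespace at queue depth 128; the repeated "EAL: No free 2048 kB hugepages reported on node 1" notices appear to mean only that hugepages were reserved on NUMA node 0, and the runs proceed normally. Condensed, the orchestration recorded above is roughly the following sketch; the flags are verbatim from the log, while the backgrounding and $! capture are inferred from the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID assignments and the later wait calls.

    # Condensed sketch of bdev_io_wait.sh's fan-out (inferred, see note above).
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
    READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!
    wait "$WRITE_PID"    # the script reaps each job in turn (wait 1436602, ...)
    wait "$READ_PID"
    wait "$FLUSH_PID"
    wait "$UNMAP_PID"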
00:07:46.685
00:07:46.686 Latency(us)
00:07:46.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:46.686 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:07:46.686 Nvme1n1 : 1.02 5377.09 21.00 0.00 0.00 23581.53 10097.40 35535.08
00:07:46.686 ===================================================================================================================
00:07:46.686 Total : 5377.09 21.00 0.00 0.00 23581.53 10097.40 35535.08
00:07:46.686
00:07:46.686 Latency(us)
00:07:46.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:46.686 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:07:46.686 Nvme1n1 : 1.00 124352.54 485.75 0.00 0.00 1025.10 347.40 1377.47
00:07:46.686 ===================================================================================================================
00:07:46.686 Total : 124352.54 485.75 0.00 0.00 1025.10 347.40 1377.47
00:07:46.686
00:07:46.686 Latency(us)
00:07:46.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:46.686 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:07:46.686 Nvme1n1 : 1.01 5390.00 21.05 0.00 0.00 23659.20 6456.51 43884.85
00:07:46.686 ===================================================================================================================
00:07:46.686 Total : 5390.00 21.05 0.00 0.00 23659.20 6456.51 43884.85
00:07:46.686
00:07:46.686 Latency(us)
00:07:46.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:46.686 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:07:46.686 Nvme1n1 : 1.01 7280.25 28.44 0.00 0.00 17506.33 7427.41 29515.47
00:07:46.686 ===================================================================================================================
00:07:46.686 Total : 7280.25 28.44 0.00 0.00 17506.33 7427.41 29515.47
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1436603
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1436605
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1436608
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in
{1..20} 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.945 rmmod nvme_tcp 00:07:46.945 rmmod nvme_fabrics 00:07:46.945 rmmod nvme_keyring 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1436488 ']' 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1436488 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1436488 ']' 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1436488 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1436488 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1436488' 00:07:46.945 killing process with pid 1436488 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1436488 00:07:46.945 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1436488 00:07:47.204 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.204 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.204 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:47.204 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.204 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.204 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.204 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.204 10:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.743 10:14:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.743 00:07:49.743 real 0m6.736s 00:07:49.743 user 0m15.944s 00:07:49.743 sys 0m3.288s 00:07:49.743 10:14:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.743 10:14:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.743 ************************************ 00:07:49.743 END TEST 
nvmf_bdev_io_wait 00:07:49.743 ************************************ 00:07:49.743 10:14:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:49.743 10:14:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:49.743 10:14:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.743 10:14:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.743 ************************************ 00:07:49.743 START TEST nvmf_queue_depth 00:07:49.743 ************************************ 00:07:49.743 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:49.743 * Looking for test storage... 00:07:49.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.743 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.743 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:49.743 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.743 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.743 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.743 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.744 10:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:51.125 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:51.125 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:51.125 Found net devices under 0000:08:00.0: cvl_0_0 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:51.125 Found net devices under 0000:08:00.1: cvl_0_1 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.125 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:07:51.126 00:07:51.126 --- 10.0.0.2 ping statistics --- 00:07:51.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.126 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:07:51.126 00:07:51.126 --- 10.0.0.1 ping statistics --- 00:07:51.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.126 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.126 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1438322 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1438322 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1438322 ']' 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
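For reference, the network wiring nvmftestinit performed above splits the two e810 ports into an initiator side (root namespace, cvl_0_1) and a target side (private namespace, cvl_0_0). The sequence boils down to the following; every command, interface name and address is taken from the log records above.

    # Both ports start clean, then the target-side port moves into a namespace:
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2:
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and sanity-check reachability both ways:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt itself then runs inside the namespace (the NVMF_APP invocation above is prefixed with ip netns exec cvl_0_0_ns_spdk), which is why the listener at 10.0.0.2:4420 is reachable from the root-namespace initiator over cvl_0_1.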
00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.383 10:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.383 [2024-07-25 10:14:40.971016] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:07:51.383 [2024-07-25 10:14:40.971116] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.383 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.383 [2024-07-25 10:14:41.038603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.383 [2024-07-25 10:14:41.156521] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.383 [2024-07-25 10:14:41.156589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.384 [2024-07-25 10:14:41.156605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.384 [2024-07-25 10:14:41.156619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.384 [2024-07-25 10:14:41.156632] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.384 [2024-07-25 10:14:41.156670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 [2024-07-25 10:14:41.282988] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 Malloc0 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 [2024-07-25 10:14:41.339148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1438351 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1438351 /var/tmp/bdevperf.sock 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1438351 ']' 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.641 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.641 [2024-07-25 10:14:41.392295] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
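rpc_cmd here is the suite's wrapper for scripts/rpc.py talking to /var/tmp/spdk.sock, so the queue-depth bring-up above, plus the controller attach and perform_tests kick that follow below, correspond roughly to this sequence. Socket paths, NQNs and flags are taken from the log; the bare rpc.py spelling is an illustrative assumption.

    # Target side: transport, backing bdev, subsystem, namespace, listener.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevperf starts idle (-z) with its own RPC server; the NVMe-oF controller
    # is attached through that socket and the timed q=1024 verify run begins:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Driving bdevperf over its own RPC socket is what lets the script attach the controller first and only then start the measured run, so connection setup never counts against the 10-second window.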
00:07:51.641 [2024-07-25 10:14:41.392391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438351 ] 00:07:51.898 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.898 [2024-07-25 10:14:41.453516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.898 [2024-07-25 10:14:41.570082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.898 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.898 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:51.898 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:51.898 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.898 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.156 NVMe0n1 00:07:52.156 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.156 10:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:52.413 Running I/O for 10 seconds... 00:08:04.610 00:08:04.610 Latency(us) 00:08:04.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.610 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:04.610 Verification LBA range: start 0x0 length 0x4000 00:08:04.610 NVMe0n1 : 10.11 7566.98 29.56 0.00 0.00 134569.45 28932.93 86604.61 00:08:04.610 =================================================================================================================== 00:08:04.610 Total : 7566.98 29.56 0.00 0.00 134569.45 28932.93 86604.61 00:08:04.610 0 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1438351 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1438351 ']' 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1438351 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1438351 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1438351' 00:08:04.610 killing process with pid 1438351 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1438351 00:08:04.610 Received shutdown 
signal, test time was about 10.000000 seconds 00:08:04.610 00:08:04.610 Latency(us) 00:08:04.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.610 =================================================================================================================== 00:08:04.610 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1438351 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:04.610 rmmod nvme_tcp 00:08:04.610 rmmod nvme_fabrics 00:08:04.610 rmmod nvme_keyring 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1438322 ']' 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1438322 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1438322 ']' 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1438322 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1438322 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:04.610 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1438322' 00:08:04.611 killing process with pid 1438322 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1438322 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1438322 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.611 10:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:05.180 00:08:05.180 real 0m15.758s 00:08:05.180 user 0m20.489s 00:08:05.180 sys 0m3.714s 00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:05.180 ************************************ 00:08:05.180 END TEST nvmf_queue_depth 00:08:05.180 ************************************ 00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.180 ************************************ 00:08:05.180 START TEST nvmf_target_multipath 00:08:05.180 ************************************ 00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:05.180 * Looking for test storage... 
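The START TEST / END TEST banners and the real/user/sys summaries between tests come from the run_test wrapper; each target script it launches re-sources test/nvmf/common.sh and runs nvmftestinit again, which is why the PATH exports, PCI scans and namespace setup repeat for every test. A sketch of the wrapper's shape (body abridged and partly assumed; the real helper lives in autotest_common.sh and also records per-test timing data):

    run_test() {
      # Print banners around the test, time it, and propagate its exit code.
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
    }

    run_test nvmf_target_multipath \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp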
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:05.180 ************************************
00:08:05.180 START TEST nvmf_target_multipath
00:08:05.180 ************************************
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:08:05.180 * Looking for test storage...
00:08:05.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:08:05.180 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable
00:08:05.181 10:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
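The device scan traced next classifies the host's NICs by PCI vendor and device ID; the two ports found in this run report 0x8086:0x159b (Intel E810). A rough standalone sketch of the same idea over sysfs, with hypothetical variable names (the real gather_supported_nvmf_pci_devs in nvmf/common.sh is considerably more involved):

  # Rough sketch only: enumerate Intel E810 ports and their net devices
  # via sysfs. Names like e810_ids are hypothetical, not from nvmf/common.sh.
  e810_ids=("0x1592" "0x159b")
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 ]] || continue
      dev=$(<"$pci/device")
      for id in "${e810_ids[@]}"; do
          [[ $dev == "$id" ]] || continue
          for net in "$pci"/net/*; do
              [[ -e $net ]] && echo "Found ${pci##*/} ($dev): ${net##*/}"
          done
      done
  done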
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=()
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=()
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=()
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=()
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=()
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=()
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=()
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:08:07.090 Found 0000:08:00.0 (0x8086 - 0x159b)
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:07.090 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:08:07.091 Found 0000:08:00.1 (0x8086 - 0x159b)
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:08:07.091 Found net devices under 0000:08:00.0: cvl_0_0
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:08:07.091 Found net devices under 0000:08:00.1: cvl_0_1
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:07.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:07.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms
00:08:07.091
00:08:07.091 --- 10.0.0.2 ping statistics ---
00:08:07.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:07.091 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:07.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:07.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms
00:08:07.091
00:08:07.091 --- 10.0.0.1 ping statistics ---
00:08:07.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:07.091 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
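nvmf_tcp_init, traced above, splits the two E810 ports across a network namespace: cvl_0_0 becomes the target interface (10.0.0.2) inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and reachability is ping-verified in both directions. The same setup condensed into a standalone sketch (root required; device names and addresses are the ones this run used):

  # Condensed from the nvmf_tcp_init trace above; run as root.
  TGT=cvl_0_0 INI=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT" netns "$NS"                # target port into the namespace
  ip addr add 10.0.0.1/24 dev "$INI"            # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  ip link set "$INI" up
  ip netns exec "$NS" ip link set "$TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1        # target namespace -> initiator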
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:08:07.091 only one NIC for nvmf test
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:07.091 rmmod nvme_tcp
00:08:07.091 rmmod nvme_fabrics
00:08:07.091 rmmod nvme_keyring
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:07.091 10:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:09.636
00:08:09.636 real 0m4.087s
00:08:09.636 user 0m0.708s
00:08:09.636 sys 0m1.354s
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:08:09.636 ************************************
00:08:09.636 END TEST nvmf_target_multipath
00:08:09.636 ************************************
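Note that nvmftestfini ran twice in the teardown above: once explicitly on the early-exit path (the multipath test bails out with "only one NIC for nvmf test" when no extra NIC is configured) and once more from the EXIT trap installed by nvmftestinit, which is why the modprobe and namespace cleanup sequence repeats with nothing left to remove. The trap pattern in miniature (illustrative sketch only, not the literal helpers):

  # Sketch of the cleanup-on-exit pattern used by nvmftestinit/nvmftestfini.
  cleanup() {
      modprobe -v -r nvme-tcp nvme-fabrics 2>/dev/null || true
      ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # illustrative step
  }
  trap cleanup SIGINT SIGTERM EXIT
  # ...test body; cleanup also fires on an early 'exit 0'...

Because the trap stays armed across the explicit call, an idempotent cleanup (the "|| true" guards) keeps the second invocation harmless.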
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:09.636 10:14:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:09.637 ************************************
00:08:09.637 START TEST nvmf_zcopy
00:08:09.637 ************************************
00:08:09.637 10:14:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:09.637 * Looking for test storage...
00:08:09.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
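paths/export.sh prepends the same toolchain directories every time it is sourced, which is why the PATH values traced above repeat the golangci/protoc/go entries many times over. Harmless, but avoidable with an idempotent prepend; a hypothetical helper, not part of the scripts used here:

  # Hypothetical idempotent PATH prepend; the traced export.sh does not
  # guard against re-adding directories, hence the duplicates above.
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;            # already present, skip
          *) PATH=$1:$PATH ;;
      esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/golangci/1.54.2/bin
  export PATH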
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable
00:08:09.637 10:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=()
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=()
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=()
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=()
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=()
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=()
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=()
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:11.020 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:08:11.021 Found 0000:08:00.0 (0x8086 - 0x159b)
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:08:11.021 Found 0000:08:00.1 (0x8086 - 0x159b)
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:08:11.021 Found net devices under 0000:08:00.0: cvl_0_0
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:08:11.021 Found net devices under 0000:08:00.1: cvl_0_1
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:11.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:11.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms
00:08:11.021
00:08:11.021 --- 10.0.0.2 ping statistics ---
00:08:11.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:11.021 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:11.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:11.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms
00:08:11.021
00:08:11.021 --- 10.0.0.1 ping statistics ---
00:08:11.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:11.021 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1442350
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1442350
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1442350 ']'
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:11.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:11.021 10:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.279 [2024-07-25 10:15:00.837154] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:08:11.279 [2024-07-25 10:15:00.837254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:11.279 EAL: No free 2048 kB hugepages reported on node 1
00:08:11.279 [2024-07-25 10:15:00.902315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:11.279 [2024-07-25 10:15:01.019311] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:11.279 [2024-07-25 10:15:01.019373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:11.279 [2024-07-25 10:15:01.019389] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:11.279 [2024-07-25 10:15:01.019403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:11.279 [2024-07-25 10:15:01.019415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:11.279 [2024-07-25 10:15:01.019445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.538 [2024-07-25 10:15:01.155533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.538 [2024-07-25 10:15:01.171697] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.538 malloc0
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
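Stripped of the rpc_cmd/xtrace plumbing, the target bring-up above is a short RPC sequence against the freshly started nvmf_tgt. The same sequence written as explicit rpc.py calls, with all values copied from this run (this assumes the default /var/tmp/spdk.sock RPC socket, which is what rpc_cmd talks to as well):

  # The target setup traced above, as plain rpc.py calls (values from this run).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport, zero-copy on
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB ram-backed bdev
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The -m 10 cap on nvmf_create_subsystem limits the subsystem to ten namespaces, which matters for the namespace-error checks later in this test.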
00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:11.538 10:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:11.538 "params": { 00:08:11.538 "name": "Nvme1", 00:08:11.538 "trtype": "tcp", 00:08:11.538 "traddr": "10.0.0.2", 00:08:11.538 "adrfam": "ipv4", 00:08:11.538 "trsvcid": "4420", 00:08:11.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:11.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:11.538 "hdgst": false, 00:08:11.538 "ddgst": false 00:08:11.538 }, 00:08:11.538 "method": "bdev_nvme_attach_controller" 00:08:11.538 }' 00:08:11.538 [2024-07-25 10:15:01.263131] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:08:11.538 [2024-07-25 10:15:01.263232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442424 ] 00:08:11.538 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.796 [2024-07-25 10:15:01.325248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.796 [2024-07-25 10:15:01.443078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.054 Running I/O for 10 seconds... 00:08:24.245 00:08:24.245 Latency(us) 00:08:24.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.245 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:24.245 Verification LBA range: start 0x0 length 0x1000 00:08:24.245 Nvme1n1 : 10.06 5386.41 42.08 0.00 0.00 23600.96 4466.16 46797.56 00:08:24.245 =================================================================================================================== 00:08:24.245 Total : 5386.41 42.08 0.00 0.00 23600.96 4466.16 46797.56 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1443993 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:24.245 { 00:08:24.245 "params": { 00:08:24.245 "name": "Nvme$subsystem", 00:08:24.245 "trtype": "$TEST_TRANSPORT", 00:08:24.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.245 "adrfam": "ipv4", 00:08:24.245 "trsvcid": "$NVMF_PORT", 00:08:24.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.245 "hdgst": ${hdgst:-false}, 00:08:24.245 "ddgst": ${ddgst:-false} 00:08:24.245 }, 00:08:24.245 "method": "bdev_nvme_attach_controller" 00:08:24.245 } 00:08:24.245 EOF 00:08:24.245 )") 00:08:24.245 10:15:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:24.245 [2024-07-25 10:15:12.074808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.074850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:24.245 10:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:24.245 "params": { 00:08:24.245 "name": "Nvme1", 00:08:24.245 "trtype": "tcp", 00:08:24.245 "traddr": "10.0.0.2", 00:08:24.245 "adrfam": "ipv4", 00:08:24.245 "trsvcid": "4420", 00:08:24.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.245 "hdgst": false, 00:08:24.245 "ddgst": false 00:08:24.245 }, 00:08:24.245 "method": "bdev_nvme_attach_controller" 00:08:24.245 }' 00:08:24.245 [2024-07-25 10:15:12.082780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.082806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.090800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.090824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.098820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.098844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.106843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.106867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.114870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.114895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.118773] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
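
Both bdevperf instances are configured the same way: gen_nvmf_target_json prints a bdev_nvme_attach_controller entry (expanded in full in the log above) and pipes it to bdevperf over /dev/fd/62 or /dev/fd/63. A sketch of the equivalent on-disk setup, assuming the usual "subsystems"/"bdev" wrapper that gen_nvmf_target_json builds around the printed params and the bdevperf binary path from the SPDK build tree:

    # hypothetical /tmp/nvmf_bdev.json, standing in for the /dev/fd pipe:
    {
      "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ]
    }
    # second run as logged: 5 s of 50/50 random read/write, queue depth 128, 8 KiB I/Os
    build/examples/bdevperf --json /tmp/nvmf_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192
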
00:08:24.245 [2024-07-25 10:15:12.118860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443993 ] 00:08:24.245 [2024-07-25 10:15:12.122888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.122911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.130912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.130943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.138934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.138957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.146957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.146979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.245 [2024-07-25 10:15:12.154981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.155004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.163001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.163025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.171024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.171047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.179048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.179071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.179398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.245 [2024-07-25 10:15:12.187130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.187177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.195132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.195176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.245 [2024-07-25 10:15:12.203120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.245 [2024-07-25 10:15:12.203143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.211152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.211182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.219163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 
10:15:12.219186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.227191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.227217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.235208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.235233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.243280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.243328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.251283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.251329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.259274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.259297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.267309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.267339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.275319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.275353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.283352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.283382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.291366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.291390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.296040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.246 [2024-07-25 10:15:12.299386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.299410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.307422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.307450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.315498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.315555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.323520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.323569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.331543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.331592] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.339558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.339607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.347585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.347634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.355575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.355611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.363627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.363679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.371649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.371699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.379642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.379679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.387638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.387661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.395661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.395684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.403705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.403734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.411720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.411752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.419739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.419766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.427762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.427788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.435781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.435806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.443807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.443830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.451838] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.451862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.459858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.459887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.467877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.467910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.475901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.475926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.483931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.483971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.491942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.491966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.500740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.500768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.507998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.508031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 Running I/O for 5 seconds... 
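
From here until the run finishes, the log is one repeating pair: subsystem.c rejects the RPC with "Requested NSID 1 already in use" and nvmf_rpc.c reports "Unable to add namespace". Namespace 1 was attached before bdevperf started, so every retry is expected to fail; judging by the nvmf_rpc_ns_paused callback in the trace, the point of the loop is to pause and resume the subsystem repeatedly while zcopy I/O is in flight, not to actually add a namespace. A hedged sketch of the kind of loop that produces this pattern (the real iteration count and driver live in the test script):

    # Each iteration pauses the subsystem, fails the add because NSID 1 is
    # taken, and resumes it -- one error pair per call in the log below
    for i in $(seq 1 1000); do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
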
00:08:24.246 [2024-07-25 10:15:12.516022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.246 [2024-07-25 10:15:12.516054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.246 [2024-07-25 10:15:12.530526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.530558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.542551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.542581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.554936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.554965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.567089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.567119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.580960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.580989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.591799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.591829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.603305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.603340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.615662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.615692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.627659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.627689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.639895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.639924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.651927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.651956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.665906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.665960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.677721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.677750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.689806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 
[2024-07-25 10:15:12.689835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.701756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.701785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.713672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.713702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.725465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.725502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.737142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.737171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.749284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.749314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.760837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.760868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.772512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.772541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.784429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.784463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.796241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.796273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.808092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.808122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.822306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.822335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.833844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.833890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.845869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.845900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.857896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.857925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.869533] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.869561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.881656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.881685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.893520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.893549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.905560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.905589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.919495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.919524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.930529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.930564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.943409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.943437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.955015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.955043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.967273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.967303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.979392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.979421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:12.991354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:12.991390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:13.003059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:13.003092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:13.015530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:13.015566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:13.027834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:13.027864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:13.039691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:13.039723] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:13.051667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:13.051695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:13.063468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:13.063518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:13.075438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.247 [2024-07-25 10:15:13.075467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.247 [2024-07-25 10:15:13.087229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.087257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.098909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.098937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.110548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.110576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.122474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.122509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.134606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.134635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.146410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.146438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.158526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.158554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.170241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.170269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.182027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.182055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.194190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.194219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.206281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.206312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.218351] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.218379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.230240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.230267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.242451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.242487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.253801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.253829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.265893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.265922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.277999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.278028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.290249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.290288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.302650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.302679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.314918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.314947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.326451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.326490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.338290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.338318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.350608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.350636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.362642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.362671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.374926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.374954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.386600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.386628] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.398212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.398240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.409725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.409753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.421423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.421452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.433216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.433245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.445141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.445169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.457040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.457068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.469007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.469038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.480801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.480829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.492544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.492571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.504373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.504401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.516319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.516366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.528309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.528340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.540161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.540189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.551860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.551888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.248 [2024-07-25 10:15:13.563819] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.248 [2024-07-25 10:15:13.563847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.575608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.575636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.587651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.587687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.599909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.599945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.611981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.612009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.623635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.623663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.637069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.637097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.648291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.648320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.659937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.659966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.671450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.671477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.683386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.683414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.695430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.695458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.707265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.707293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.719252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.719280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.730969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.730997] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.742837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.742865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.755323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.755352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.767062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.767106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.779630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.779665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.791548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.791577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.803204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.803233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.815200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.815236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.827286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.827323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.839385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.839414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.851627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.851655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.863976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.864010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.876116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.876161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.888069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.888102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.901819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.901857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.913291] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.913319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.925051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.925080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.936887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.936917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.950871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.950901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.962153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.962182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.973890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.973919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.985794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.985823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:13.998003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:13.998032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.249 [2024-07-25 10:15:14.010262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.249 [2024-07-25 10:15:14.010291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.510 [2024-07-25 10:15:14.022439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.510 [2024-07-25 10:15:14.022469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.510 [2024-07-25 10:15:14.034899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.510 [2024-07-25 10:15:14.034927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.510 [2024-07-25 10:15:14.047131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.510 [2024-07-25 10:15:14.047160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.510 [2024-07-25 10:15:14.058841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.510 [2024-07-25 10:15:14.058869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.510 [2024-07-25 10:15:14.072607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.510 [2024-07-25 10:15:14.072636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.510 [2024-07-25 10:15:14.084279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.510 [2024-07-25 10:15:14.084308] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.510 [2024-07-25 10:15:14.096735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.510 [2024-07-25 10:15:14.096763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.510 - 00:08:27.901 [the same two-message pair repeats roughly every 12 ms, timestamps 10:15:14.109 through 10:15:17.535; duplicate entries elided, a reproduction sketch follows]
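What this loop is exercising: the target rejects any namespace-add RPC whose requested NSID is already allocated, so each retry logs the same pair of errors. Below is a minimal sketch of triggering the identical failure by hand, assuming a locally running SPDK target and the scripts/rpc.py client from the SPDK tree; the bdev names and NQN are hypothetical placeholders, not values taken from this run.

  # Create two backing bdevs and a subsystem (hypothetical names):
  # 64 MB malloc bdevs with 512-byte blocks, and a subsystem allowing any host.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  # The first attach claims NSID 1; the second asks for NSID 1 again and is
  # rejected with the same "Requested NSID 1 already in use" /
  # "Unable to add namespace" pair seen in the log above.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1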
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.431359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.443269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.443297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.455736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.455765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.467657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.467685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.479637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.479665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.491125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.491156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.503108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.503136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.515340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.515368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.527474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.527511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.535749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.535777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 00:08:27.901 Latency(us) 00:08:27.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.901 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:27.901 Nvme1n1 : 5.01 10569.33 82.57 0.00 0.00 12092.41 5728.33 25631.86 00:08:27.901 =================================================================================================================== 00:08:27.901 Total : 10569.33 82.57 0.00 0.00 12092.41 5728.33 25631.86 00:08:27.901 [2024-07-25 10:15:17.542853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.542879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.550876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.550903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.558911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.558946] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.566994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.567058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.575007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.575065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.901 [2024-07-25 10:15:17.583033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.901 [2024-07-25 10:15:17.583097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.591042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.591099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.599081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.599146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.607107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.607173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.615117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.615173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.623140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.623197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.631142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.631190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.639119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.639143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.647155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.647187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.655169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.655197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.663187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.663212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.902 [2024-07-25 10:15:17.671274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.902 [2024-07-25 10:15:17.671336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.679288] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.679335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.687254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.687280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.695286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.695317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.703302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.703329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.711322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.711348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.719408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.719473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.727432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.727500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.735380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.735403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.743404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.743427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.160 [2024-07-25 10:15:17.751427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.160 [2024-07-25 10:15:17.751451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1443993) - No such process 00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1443993 00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.161 delay0 00:08:28.161 10:15:17 
00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.161 10:15:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:28.161 EAL: No free 2048 kB hugepages reported on node 1
00:08:28.161 [2024-07-25 10:15:17.916617] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:36.270 Initializing NVMe Controllers
00:08:36.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:36.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:36.270 Initialization complete. Launching workers.
00:08:36.270 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 20158
00:08:36.270 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20282, failed to submit 116
00:08:36.270 success 20195, unsuccess 87, failed 0
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:36.270 rmmod nvme_tcp
00:08:36.270 rmmod nvme_fabrics
00:08:36.270 rmmod nvme_keyring
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1442350 ']'
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1442350
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1442350 ']'
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1442350
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:36.270 10:15:25
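For reference, the abort run traced above can be replayed by hand against a target configured as in this log; a minimal sketch, assuming nqn.2016-06.io.spdk:cnode1 is live on 10.0.0.2:4420 and using scripts/rpc.py directly (the harness's rpc_cmd is a thin wrapper around it):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Re-create the slow namespace: a delay bdev on top of malloc0 keeps I/O in
  # flight long enough for aborts to land (parameters taken from the trace above).
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive randrw I/O at queue depth 64 for 5 s and submit aborts against it:
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'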
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1442350 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1442350' 00:08:36.270 killing process with pid 1442350 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1442350 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1442350 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.270 10:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.179 00:08:38.179 real 0m28.492s 00:08:38.179 user 0m42.369s 00:08:38.179 sys 0m8.542s 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.179 ************************************ 00:08:38.179 END TEST nvmf_zcopy 00:08:38.179 ************************************ 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.179 ************************************ 00:08:38.179 START TEST nvmf_nmic 00:08:38.179 ************************************ 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:38.179 * Looking for test storage... 
00:08:38.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.179 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.180 10:15:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.180 10:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.560 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:39.561 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:39.561 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.561 10:15:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:39.561 Found net devices under 0000:08:00.0: cvl_0_0 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:39.561 Found net devices under 0000:08:00.1: cvl_0_1 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:39.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:39.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms
00:08:39.561 
00:08:39.561 --- 10.0.0.2 ping statistics ---
00:08:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:39.561 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:39.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:39.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms
00:08:39.561 
00:08:39.561 --- 10.0.0.1 ping statistics ---
00:08:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:39.561 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1446613
00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1446613 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1446613 ']' 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.561 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.562 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:39.820 [2024-07-25 10:15:29.374328] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:08:39.820 [2024-07-25 10:15:29.374422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.820 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.820 [2024-07-25 10:15:29.443788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.820 [2024-07-25 10:15:29.562177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.820 [2024-07-25 10:15:29.562241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.820 [2024-07-25 10:15:29.562257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.820 [2024-07-25 10:15:29.562274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.820 [2024-07-25 10:15:29.562287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
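For reference, the nvmftestinit network plumbing and the nvmfappstart launch traced above boil down to the following; a minimal sketch, assuming the two e810 ports detected earlier keep their cvl_0_0/cvl_0_1 names and that the commands run from the spdk checkout:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Start the target inside the namespace with the same flags as this run:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &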
00:08:39.820 [2024-07-25 10:15:29.562384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.820 [2024-07-25 10:15:29.562447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.820 [2024-07-25 10:15:29.562510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.820 [2024-07-25 10:15:29.562507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.079 [2024-07-25 10:15:29.707669] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.079 Malloc0 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.079 [2024-07-25 10:15:29.756640] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:08:40.079 test case1: single bdev can't be used in multiple subsystems
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:40.079 [2024-07-25 10:15:29.780457] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:08:40.079 [2024-07-25 10:15:29.780495] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:08:40.079 [2024-07-25 10:15:29.780513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:40.079 request:
00:08:40.079 {
00:08:40.079   "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:40.079   "namespace": {
00:08:40.079     "bdev_name": "Malloc0",
00:08:40.079     "no_auto_visible": false
00:08:40.079   },
00:08:40.079   "method": "nvmf_subsystem_add_ns",
00:08:40.079   "req_id": 1
00:08:40.079 }
00:08:40.079 Got JSON-RPC error response
00:08:40.079 response:
00:08:40.079 {
00:08:40.079   "code": -32602,
00:08:40.079   "message": "Invalid parameters"
00:08:40.079 }
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:08:40.079 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:08:40.080 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:08:40.080 Adding namespace failed - expected result.
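The failure above is the point of test case1: nvmf_subsystem_add_ns opens the bdev exclusive_write, so the first subsystem's claim on Malloc0 blocks the second. A minimal sketch of the same collision using scripts/rpc.py against a running target (default RPC socket assumed; all names and flags are the ones used in this run):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed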
00:08:40.080 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:08:40.080 test case2: host connect to nvmf target in multiple paths
00:08:40.080 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:08:40.080 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.080 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:40.080 [2024-07-25 10:15:29.792602] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:08:40.080 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.080 10:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:08:40.646 10:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:08:41.213 10:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:08:41.213 10:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:08:41.213 10:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:08:41.213 10:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:08:41.213 10:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:08:43.110 10:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:08:43.110 10:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:08:43.110 10:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:08:43.110 10:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:08:43.111 10:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:08:43.111 10:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:08:43.111 10:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:08:43.111 [global]
00:08:43.111 thread=1
00:08:43.111 invalidate=1
00:08:43.111 rw=write
00:08:43.111 time_based=1
00:08:43.111 runtime=1
00:08:43.111 ioengine=libaio
00:08:43.111 direct=1
00:08:43.111 bs=4096
00:08:43.111 iodepth=1
00:08:43.111 norandommap=0
00:08:43.111 numjobs=1
00:08:43.111 
00:08:43.111 verify_dump=1
00:08:43.111 verify_backlog=512
00:08:43.111 verify_state_save=0
00:08:43.111 do_verify=1
00:08:43.111 verify=crc32c-intel
00:08:43.111 [job0]
00:08:43.111 filename=/dev/nvme0n1
00:08:43.369 Could not set queue depth (nvme0n1)
00:08:43.369 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:43.369 fio-3.35
00:08:43.369 
00:08:43.369 Starting 1 thread
00:08:44.741 
00:08:44.741 job0: (groupid=0, jobs=1): err= 0: pid=1447097: Thu Jul 25 10:15:34 2024
00:08:44.741   read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec)
00:08:44.741     slat (nsec): min=8441, max=41270, avg=26173.18, stdev=9441.80
00:08:44.741     clat (usec): min=41449, max=42048, avg=41944.31, stdev=115.39
00:08:44.741      lat (usec): min=41458, max=42084, avg=41970.49, stdev=118.75
00:08:44.741     clat percentiles (usec):
00:08:44.741      |  1.00th=[41681],  5.00th=[41681], 10.00th=[41681], 20.00th=[41681],
00:08:44.741      | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:08:44.741      | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:08:44.741      | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:08:44.741      | 99.99th=[42206]
00:08:44.741   write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets
00:08:44.741     slat (nsec): min=7867, max=42721, avg=16800.03, stdev=6345.35
00:08:44.741     clat (usec): min=155, max=372, avg=200.94, stdev=27.63
00:08:44.741      lat (usec): min=164, max=400, avg=217.74, stdev=31.18
00:08:44.741     clat percentiles (usec):
00:08:44.741      |  1.00th=[ 159],  5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 180],
00:08:44.741      | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202],
00:08:44.741      | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 233], 95.00th=[ 243],
00:08:44.741      | 99.00th=[ 302], 99.50th=[ 359], 99.90th=[ 371], 99.95th=[ 371],
00:08:44.741      | 99.99th=[ 371]
00:08:44.741    bw (  KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:08:44.741    iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:08:44.741   lat (usec)   : 250=92.88%, 500=3.00%
00:08:44.741   lat (msec)   : 50=4.12%
00:08:44.741   cpu          : usr=1.16%, sys=0.48%, ctx=538, majf=0, minf=1
00:08:44.741   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:44.741      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:44.741      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:44.741      issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:44.741      latency   : target=0, window=0, percentile=100.00%, depth=1
00:08:44.741 
00:08:44.741 Run status group 0 (all jobs):
00:08:44.741    READ: bw=84.8KiB/s (86.8kB/s), 84.8KiB/s-84.8KiB/s (86.8kB/s-86.8kB/s), io=88.0KiB (90.1kB), run=1038-1038msec
00:08:44.741   WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec
00:08:44.741 
00:08:44.741 Disk stats (read/write):
00:08:44.741   nvme0n1: ios=72/512, merge=0/0, ticks=1073/101, in_queue=1174, util=95.79%
00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:08:44.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:08:44.741 10:15:34
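For reference, the verify job above can be reproduced without the fio-wrapper as a single fio command line; a minimal sketch, flag-for-flag from the generated job file (the device name /dev/nvme0n1 is the one this run produced and will differ on other hosts):

  # write 4 KiB blocks at queue depth 1 for 1 s, then verify with crc32c-intel
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
      --invalidate=1 --norandommap=0 --do_verify=1 --verify=crc32c-intel \
      --verify_backlog=512 --verify_dump=1 --verify_state_save=0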
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.741 rmmod nvme_tcp 00:08:44.741 rmmod nvme_fabrics 00:08:44.741 rmmod nvme_keyring 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1446613 ']' 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1446613 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1446613 ']' 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1446613 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1446613 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1446613' 00:08:44.741 killing process with pid 1446613 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1446613 00:08:44.741 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1446613 00:08:45.001 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.001 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.001 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.001 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.001 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.001 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.001 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.001 10:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.906 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:46.906 00:08:46.906 real 0m9.118s 00:08:46.906 user 0m20.583s 00:08:46.906 sys 0m1.951s 00:08:46.906 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.906 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:46.906 ************************************ 00:08:46.906 END TEST nvmf_nmic 00:08:46.906 ************************************ 00:08:46.906 10:15:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:46.906 10:15:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:46.906 10:15:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.906 10:15:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.166 ************************************ 00:08:47.166 START TEST nvmf_fio_target 00:08:47.166 ************************************ 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:47.166 * Looking for test storage... 00:08:47.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.166 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.167 10:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.073 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:49.074 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:49.074 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:49.074 Found net devices under 0000:08:00.0: cvl_0_0 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:49.074 Found net devices under 0000:08:00.1: cvl_0_1 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.074 10:15:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:08:49.074 00:08:49.074 --- 10.0.0.2 ping statistics --- 00:08:49.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.074 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:08:49.074 00:08:49.074 --- 10.0.0.1 ping statistics --- 00:08:49.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.074 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:08:49.074 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1448705 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1448705 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1448705 ']' 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.075 10:15:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.075 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.075 [2024-07-25 10:15:38.552378] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:08:49.075 [2024-07-25 10:15:38.552486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.075 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.075 [2024-07-25 10:15:38.617574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.075 [2024-07-25 10:15:38.734617] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.075 [2024-07-25 10:15:38.734674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.075 [2024-07-25 10:15:38.734690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.075 [2024-07-25 10:15:38.734703] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.075 [2024-07-25 10:15:38.734715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
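For readers reproducing this topology by hand: the namespace plumbing that nvmf/common.sh traces above moves the first e810-class port (cvl_0_0) into a private network namespace as the target at 10.0.0.2/24, leaves the second port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1/24, verifies reachability in both directions, and then launches nvmf_tgt inside the namespace. A minimal sketch assembled from the exact commands in this log; the cvl_0_* interface names are this CI host's renames of its two detected e810-class ports, and $SPDK_DIR is a hypothetical stand-in for the absolute Jenkins workspace path the log uses:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> initiator
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Isolating the target port in its own namespace is what lets one host act as both NVMe/TCP target and initiator over the physical link, rather than having the kernel short-circuit the connection locally.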
00:08:49.075 [2024-07-25 10:15:38.734809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.075 [2024-07-25 10:15:38.738502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.075 [2024-07-25 10:15:38.738577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.075 [2024-07-25 10:15:38.738610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.333 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.333 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:08:49.333 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.333 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.333 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.333 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.333 10:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.590 [2024-07-25 10:15:39.158635] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.590 10:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:49.848 10:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:49.848 10:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.340 10:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:50.340 10:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.597 10:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:50.597 10:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.853 10:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:50.853 10:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:51.112 10:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.371 10:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:51.371 10:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.629 10:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:51.629 10:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.195 10:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:52.195 10:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:52.453 10:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:52.710 10:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:52.710 10:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.968 10:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:52.968 10:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:53.225 10:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.483 [2024-07-25 10:15:43.165639] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.483 10:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:53.741 10:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:54.000 10:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.567 10:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:54.567 10:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:08:54.567 10:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.567 10:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:08:54.567 10:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:08:54.567 10:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:08:57.098 10:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:57.098 10:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:57.098 10:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:57.098 10:15:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:08:57.098 10:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:57.098 10:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:08:57.098 10:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:57.098 [global] 00:08:57.098 thread=1 00:08:57.098 invalidate=1 00:08:57.098 rw=write 00:08:57.098 time_based=1 00:08:57.098 runtime=1 00:08:57.098 ioengine=libaio 00:08:57.098 direct=1 00:08:57.098 bs=4096 00:08:57.098 iodepth=1 00:08:57.098 norandommap=0 00:08:57.098 numjobs=1 00:08:57.098 00:08:57.098 verify_dump=1 00:08:57.098 verify_backlog=512 00:08:57.098 verify_state_save=0 00:08:57.098 do_verify=1 00:08:57.098 verify=crc32c-intel 00:08:57.098 [job0] 00:08:57.098 filename=/dev/nvme0n1 00:08:57.098 [job1] 00:08:57.098 filename=/dev/nvme0n2 00:08:57.098 [job2] 00:08:57.098 filename=/dev/nvme0n3 00:08:57.098 [job3] 00:08:57.098 filename=/dev/nvme0n4 00:08:57.098 Could not set queue depth (nvme0n1) 00:08:57.098 Could not set queue depth (nvme0n2) 00:08:57.098 Could not set queue depth (nvme0n3) 00:08:57.098 Could not set queue depth (nvme0n4) 00:08:57.098 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.098 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.098 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.098 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.098 fio-3.35 00:08:57.098 Starting 4 threads 00:08:58.033 00:08:58.033 job0: (groupid=0, jobs=1): err= 0: pid=1449550: Thu Jul 25 10:15:47 2024 00:08:58.033 read: IOPS=1029, BW=4120KiB/s (4219kB/s)(4124KiB/1001msec) 00:08:58.033 slat (nsec): min=6207, max=36724, avg=10850.24, stdev=4824.57 00:08:58.033 clat (usec): min=261, max=41113, avg=579.16, stdev=3341.18 00:08:58.033 lat (usec): min=268, max=41132, avg=590.01, stdev=3342.71 00:08:58.033 clat percentiles (usec): 00:08:58.033 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:08:58.033 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:08:58.033 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 338], 00:08:58.033 | 99.00th=[ 453], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:58.033 | 99.99th=[41157] 00:08:58.033 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:58.033 slat (nsec): min=8213, max=52283, avg=17851.52, stdev=7036.65 00:08:58.033 clat (usec): min=163, max=1737, avg=230.12, stdev=60.76 00:08:58.033 lat (usec): min=172, max=1773, avg=247.97, stdev=64.04 00:08:58.033 clat percentiles (usec): 00:08:58.033 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 198], 00:08:58.033 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 233], 00:08:58.033 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 306], 00:08:58.033 | 99.00th=[ 343], 99.50th=[ 367], 99.90th=[ 1172], 99.95th=[ 1745], 00:08:58.033 | 99.99th=[ 1745] 00:08:58.033 bw ( KiB/s): min= 4096, max= 4096, per=25.40%, avg=4096.00, stdev= 0.00, samples=1 00:08:58.033 iops : min= 1024, max= 1024, avg=1024.00, stdev= 
0.00, samples=1 00:08:58.033 lat (usec) : 250=46.44%, 500=53.14%, 1000=0.04% 00:08:58.033 lat (msec) : 2=0.08%, 4=0.04%, 50=0.27% 00:08:58.033 cpu : usr=3.00%, sys=5.10%, ctx=2569, majf=0, minf=1 00:08:58.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.033 issued rwts: total=1031,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.033 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.033 job1: (groupid=0, jobs=1): err= 0: pid=1449551: Thu Jul 25 10:15:47 2024 00:08:58.033 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec) 00:08:58.033 slat (nsec): min=15439, max=35667, avg=25995.67, stdev=8125.46 00:08:58.033 clat (usec): min=40944, max=41990, avg=41465.01, stdev=497.50 00:08:58.033 lat (usec): min=40978, max=42017, avg=41491.00, stdev=499.05 00:08:58.033 clat percentiles (usec): 00:08:58.033 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:58.033 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:08:58.033 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:58.033 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:58.033 | 99.99th=[42206] 00:08:58.033 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:08:58.033 slat (nsec): min=8073, max=47118, avg=20739.97, stdev=4859.41 00:08:58.033 clat (usec): min=198, max=840, avg=246.43, stdev=42.41 00:08:58.033 lat (usec): min=218, max=861, avg=267.17, stdev=43.00 00:08:58.033 clat percentiles (usec): 00:08:58.033 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 227], 00:08:58.033 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:08:58.033 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:08:58.033 | 99.00th=[ 343], 99.50th=[ 611], 99.90th=[ 840], 99.95th=[ 840], 00:08:58.033 | 99.99th=[ 840] 00:08:58.033 bw ( KiB/s): min= 4096, max= 4096, per=25.40%, avg=4096.00, stdev= 0.00, samples=1 00:08:58.033 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:58.033 lat (usec) : 250=63.79%, 500=31.71%, 750=0.38%, 1000=0.19% 00:08:58.033 lat (msec) : 50=3.94% 00:08:58.033 cpu : usr=0.79%, sys=1.39%, ctx=533, majf=0, minf=1 00:08:58.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.033 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.033 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.033 job2: (groupid=0, jobs=1): err= 0: pid=1449552: Thu Jul 25 10:15:47 2024 00:08:58.033 read: IOPS=693, BW=2773KiB/s (2840kB/s)(2776KiB/1001msec) 00:08:58.033 slat (nsec): min=6414, max=35273, avg=12410.74, stdev=5207.54 00:08:58.033 clat (usec): min=249, max=42966, avg=945.43, stdev=5101.32 00:08:58.033 lat (usec): min=256, max=42988, avg=957.84, stdev=5102.67 00:08:58.033 clat percentiles (usec): 00:08:58.033 | 1.00th=[ 253], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:08:58.033 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:08:58.033 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 359], 00:08:58.034 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:08:58.034 
| 99.99th=[42730] 00:08:58.034 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:08:58.034 slat (usec): min=10, max=32408, avg=80.24, stdev=1304.17 00:08:58.034 clat (usec): min=188, max=424, avg=238.93, stdev=30.64 00:08:58.034 lat (usec): min=208, max=32790, avg=319.17, stdev=1310.29 00:08:58.034 clat percentiles (usec): 00:08:58.034 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:08:58.034 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:08:58.034 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 289], 00:08:58.034 | 99.00th=[ 367], 99.50th=[ 396], 99.90th=[ 420], 99.95th=[ 424], 00:08:58.034 | 99.99th=[ 424] 00:08:58.034 bw ( KiB/s): min= 4096, max= 4096, per=25.40%, avg=4096.00, stdev= 0.00, samples=1 00:08:58.034 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:58.034 lat (usec) : 250=44.99%, 500=54.31%, 750=0.06% 00:08:58.034 lat (msec) : 50=0.64% 00:08:58.034 cpu : usr=3.40%, sys=3.10%, ctx=1721, majf=0, minf=1 00:08:58.034 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.034 issued rwts: total=694,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.034 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.034 job3: (groupid=0, jobs=1): err= 0: pid=1449553: Thu Jul 25 10:15:47 2024 00:08:58.034 read: IOPS=545, BW=2181KiB/s (2233kB/s)(2216KiB/1016msec) 00:08:58.034 slat (nsec): min=6442, max=35480, avg=8683.46, stdev=4340.70 00:08:58.034 clat (usec): min=252, max=41277, avg=1276.52, stdev=6157.61 00:08:58.034 lat (usec): min=260, max=41295, avg=1285.21, stdev=6160.72 00:08:58.034 clat percentiles (usec): 00:08:58.034 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:08:58.034 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 347], 00:08:58.034 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 392], 95.00th=[ 416], 00:08:58.034 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:58.034 | 99.99th=[41157] 00:08:58.034 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:08:58.034 slat (nsec): min=8632, max=66427, avg=19606.10, stdev=6526.07 00:08:58.034 clat (usec): min=198, max=1229, avg=270.38, stdev=60.39 00:08:58.034 lat (usec): min=208, max=1252, avg=289.99, stdev=61.14 00:08:58.034 clat percentiles (usec): 00:08:58.034 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:08:58.034 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:08:58.034 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 322], 00:08:58.034 | 99.00th=[ 408], 99.50th=[ 685], 99.90th=[ 1004], 99.95th=[ 1237], 00:08:58.034 | 99.99th=[ 1237] 00:08:58.034 bw ( KiB/s): min= 4096, max= 4096, per=25.40%, avg=4096.00, stdev= 0.00, samples=2 00:08:58.034 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:08:58.034 lat (usec) : 250=19.84%, 500=78.58%, 750=0.44%, 1000=0.25% 00:08:58.034 lat (msec) : 2=0.06%, 50=0.82% 00:08:58.034 cpu : usr=1.97%, sys=2.86%, ctx=1579, majf=0, minf=1 00:08:58.034 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.034 issued rwts: total=554,1024,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:08:58.034 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.034 00:08:58.034 Run status group 0 (all jobs): 00:08:58.034 READ: bw=9055KiB/s (9272kB/s), 83.1KiB/s-4120KiB/s (85.1kB/s-4219kB/s), io=9200KiB (9421kB), run=1001-1016msec 00:08:58.034 WRITE: bw=15.7MiB/s (16.5MB/s), 2026KiB/s-6138KiB/s (2074kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1016msec 00:08:58.034 00:08:58.034 Disk stats (read/write): 00:08:58.034 nvme0n1: ios=1037/1024, merge=0/0, ticks=701/219, in_queue=920, util=85.57% 00:08:58.034 nvme0n2: ios=67/512, merge=0/0, ticks=776/122, in_queue=898, util=91.16% 00:08:58.034 nvme0n3: ios=565/670, merge=0/0, ticks=1054/155, in_queue=1209, util=94.68% 00:08:58.034 nvme0n4: ios=605/1024, merge=0/0, ticks=755/257, in_queue=1012, util=94.11% 00:08:58.034 10:15:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:58.034 [global] 00:08:58.034 thread=1 00:08:58.034 invalidate=1 00:08:58.034 rw=randwrite 00:08:58.034 time_based=1 00:08:58.034 runtime=1 00:08:58.034 ioengine=libaio 00:08:58.034 direct=1 00:08:58.034 bs=4096 00:08:58.034 iodepth=1 00:08:58.034 norandommap=0 00:08:58.034 numjobs=1 00:08:58.034 00:08:58.034 verify_dump=1 00:08:58.034 verify_backlog=512 00:08:58.034 verify_state_save=0 00:08:58.034 do_verify=1 00:08:58.034 verify=crc32c-intel 00:08:58.034 [job0] 00:08:58.034 filename=/dev/nvme0n1 00:08:58.034 [job1] 00:08:58.034 filename=/dev/nvme0n2 00:08:58.034 [job2] 00:08:58.034 filename=/dev/nvme0n3 00:08:58.034 [job3] 00:08:58.034 filename=/dev/nvme0n4 00:08:58.292 Could not set queue depth (nvme0n1) 00:08:58.292 Could not set queue depth (nvme0n2) 00:08:58.292 Could not set queue depth (nvme0n3) 00:08:58.292 Could not set queue depth (nvme0n4) 00:08:58.292 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.292 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.292 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.292 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.292 fio-3.35 00:08:58.292 Starting 4 threads 00:08:59.717 00:08:59.717 job0: (groupid=0, jobs=1): err= 0: pid=1449729: Thu Jul 25 10:15:49 2024 00:08:59.717 read: IOPS=511, BW=2045KiB/s (2095kB/s)(2068KiB/1011msec) 00:08:59.717 slat (nsec): min=5737, max=38003, avg=12309.61, stdev=3312.36 00:08:59.717 clat (usec): min=293, max=42017, avg=1361.61, stdev=6196.63 00:08:59.717 lat (usec): min=306, max=42037, avg=1373.92, stdev=6197.96 00:08:59.717 clat percentiles (usec): 00:08:59.717 | 1.00th=[ 318], 5.00th=[ 343], 10.00th=[ 363], 20.00th=[ 379], 00:08:59.717 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 404], 60.00th=[ 412], 00:08:59.717 | 70.00th=[ 420], 80.00th=[ 429], 90.00th=[ 437], 95.00th=[ 457], 00:08:59.717 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:59.717 | 99.99th=[42206] 00:08:59.717 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:08:59.717 slat (nsec): min=7178, max=47888, avg=15518.97, stdev=5331.47 00:08:59.717 clat (usec): min=178, max=2221, avg=271.67, stdev=76.40 00:08:59.717 lat (usec): min=185, max=2230, avg=287.19, stdev=76.27 00:08:59.717 clat percentiles (usec): 00:08:59.717 | 
1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 231], 00:08:59.717 | 30.00th=[ 243], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 277], 00:08:59.717 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 351], 00:08:59.717 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 676], 99.95th=[ 2212], 00:08:59.717 | 99.99th=[ 2212] 00:08:59.717 bw ( KiB/s): min= 4096, max= 4096, per=17.43%, avg=4096.00, stdev= 0.00, samples=2 00:08:59.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:08:59.717 lat (usec) : 250=22.65%, 500=75.99%, 750=0.45% 00:08:59.717 lat (msec) : 4=0.13%, 50=0.78% 00:08:59.717 cpu : usr=1.68%, sys=1.78%, ctx=1542, majf=0, minf=1 00:08:59.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.717 issued rwts: total=517,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.717 job1: (groupid=0, jobs=1): err= 0: pid=1449730: Thu Jul 25 10:15:49 2024 00:08:59.717 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:08:59.717 slat (nsec): min=5868, max=57296, avg=12297.55, stdev=4927.20 00:08:59.717 clat (usec): min=257, max=1093, avg=339.23, stdev=60.49 00:08:59.717 lat (usec): min=265, max=1101, avg=351.52, stdev=60.49 00:08:59.717 clat percentiles (usec): 00:08:59.717 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:08:59.717 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 338], 00:08:59.717 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 416], 95.00th=[ 445], 00:08:59.717 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 922], 99.95th=[ 1090], 00:08:59.717 | 99.99th=[ 1090] 00:08:59.717 write: IOPS=1840, BW=7361KiB/s (7537kB/s)(7368KiB/1001msec); 0 zone resets 00:08:59.717 slat (nsec): min=7497, max=54905, avg=14056.36, stdev=5567.62 00:08:59.717 clat (usec): min=170, max=1508, avg=228.34, stdev=58.68 00:08:59.717 lat (usec): min=186, max=1520, avg=242.40, stdev=59.29 00:08:59.717 clat percentiles (usec): 00:08:59.717 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 202], 00:08:59.717 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:08:59.717 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 297], 00:08:59.717 | 99.00th=[ 383], 99.50th=[ 523], 99.90th=[ 1012], 99.95th=[ 1516], 00:08:59.717 | 99.99th=[ 1516] 00:08:59.717 bw ( KiB/s): min= 8192, max= 8192, per=34.87%, avg=8192.00, stdev= 0.00, samples=1 00:08:59.717 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:59.717 lat (usec) : 250=47.57%, 500=51.39%, 750=0.77%, 1000=0.18% 00:08:59.717 lat (msec) : 2=0.09% 00:08:59.717 cpu : usr=3.20%, sys=5.80%, ctx=3380, majf=0, minf=1 00:08:59.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.717 issued rwts: total=1536,1842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.717 job2: (groupid=0, jobs=1): err= 0: pid=1449731: Thu Jul 25 10:15:49 2024 00:08:59.717 read: IOPS=1366, BW=5467KiB/s (5598kB/s)(5472KiB/1001msec) 00:08:59.717 slat (nsec): min=5215, max=49784, avg=12584.98, stdev=5802.29 00:08:59.717 clat (usec): min=259, max=40877, 
avg=429.73, stdev=1095.73 00:08:59.717 lat (usec): min=266, max=40884, avg=442.31, stdev=1095.58 00:08:59.717 clat percentiles (usec): 00:08:59.717 | 1.00th=[ 293], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 355], 00:08:59.717 | 30.00th=[ 367], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 404], 00:08:59.717 | 70.00th=[ 429], 80.00th=[ 461], 90.00th=[ 474], 95.00th=[ 486], 00:08:59.717 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[40633], 00:08:59.717 | 99.99th=[40633] 00:08:59.717 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:59.717 slat (nsec): min=6522, max=34715, avg=11805.50, stdev=4577.78 00:08:59.717 clat (usec): min=174, max=405, avg=239.04, stdev=32.61 00:08:59.717 lat (usec): min=181, max=413, avg=250.85, stdev=33.35 00:08:59.717 clat percentiles (usec): 00:08:59.717 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:08:59.717 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:08:59.717 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 306], 00:08:59.717 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 388], 99.95th=[ 404], 00:08:59.717 | 99.99th=[ 404] 00:08:59.717 bw ( KiB/s): min= 7872, max= 7872, per=33.51%, avg=7872.00, stdev= 0.00, samples=1 00:08:59.717 iops : min= 1968, max= 1968, avg=1968.00, stdev= 0.00, samples=1 00:08:59.717 lat (usec) : 250=38.60%, 500=59.78%, 750=1.58% 00:08:59.717 lat (msec) : 50=0.03% 00:08:59.717 cpu : usr=1.50%, sys=4.20%, ctx=2904, majf=0, minf=1 00:08:59.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.717 issued rwts: total=1368,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.717 job3: (groupid=0, jobs=1): err= 0: pid=1449732: Thu Jul 25 10:15:49 2024 00:08:59.717 read: IOPS=1412, BW=5650KiB/s (5786kB/s)(5656KiB/1001msec) 00:08:59.717 slat (nsec): min=4993, max=44605, avg=11825.94, stdev=5319.57 00:08:59.717 clat (usec): min=289, max=586, avg=393.48, stdev=49.49 00:08:59.717 lat (usec): min=294, max=598, avg=405.30, stdev=51.50 00:08:59.717 clat percentiles (usec): 00:08:59.718 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 355], 00:08:59.718 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 396], 00:08:59.718 | 70.00th=[ 408], 80.00th=[ 420], 90.00th=[ 453], 95.00th=[ 502], 00:08:59.718 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 578], 99.95th=[ 586], 00:08:59.718 | 99.99th=[ 586] 00:08:59.718 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:59.718 slat (nsec): min=6538, max=40356, avg=11849.68, stdev=5158.23 00:08:59.718 clat (usec): min=175, max=558, avg=259.49, stdev=33.80 00:08:59.718 lat (usec): min=186, max=576, avg=271.34, stdev=35.95 00:08:59.718 clat percentiles (usec): 00:08:59.718 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:08:59.718 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 265], 00:08:59.718 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 318], 00:08:59.718 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 412], 99.95th=[ 562], 00:08:59.718 | 99.99th=[ 562] 00:08:59.718 bw ( KiB/s): min= 8192, max= 8192, per=34.87%, avg=8192.00, stdev= 0.00, samples=1 00:08:59.718 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:59.718 lat (usec) : 250=24.03%, 
500=73.46%, 750=2.51% 00:08:59.718 cpu : usr=2.20%, sys=3.50%, ctx=2950, majf=0, minf=1 00:08:59.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.718 issued rwts: total=1414,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.718 00:08:59.718 Run status group 0 (all jobs): 00:08:59.718 READ: bw=18.7MiB/s (19.6MB/s), 2045KiB/s-6138KiB/s (2095kB/s-6285kB/s), io=18.9MiB (19.8MB), run=1001-1011msec 00:08:59.718 WRITE: bw=22.9MiB/s (24.1MB/s), 4051KiB/s-7361KiB/s (4149kB/s-7537kB/s), io=23.2MiB (24.3MB), run=1001-1011msec 00:08:59.718 00:08:59.718 Disk stats (read/write): 00:08:59.718 nvme0n1: ios=556/1024, merge=0/0, ticks=826/272, in_queue=1098, util=95.99% 00:08:59.718 nvme0n2: ios=1381/1536, merge=0/0, ticks=462/339, in_queue=801, util=87.61% 00:08:59.718 nvme0n3: ios=1024/1488, merge=0/0, ticks=451/360, in_queue=811, util=88.96% 00:08:59.718 nvme0n4: ios=1038/1536, merge=0/0, ticks=403/396, in_queue=799, util=89.61% 00:08:59.718 10:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:59.718 [global] 00:08:59.718 thread=1 00:08:59.718 invalidate=1 00:08:59.718 rw=write 00:08:59.718 time_based=1 00:08:59.718 runtime=1 00:08:59.718 ioengine=libaio 00:08:59.718 direct=1 00:08:59.718 bs=4096 00:08:59.718 iodepth=128 00:08:59.718 norandommap=0 00:08:59.718 numjobs=1 00:08:59.718 00:08:59.718 verify_dump=1 00:08:59.718 verify_backlog=512 00:08:59.718 verify_state_save=0 00:08:59.718 do_verify=1 00:08:59.718 verify=crc32c-intel 00:08:59.718 [job0] 00:08:59.718 filename=/dev/nvme0n1 00:08:59.718 [job1] 00:08:59.718 filename=/dev/nvme0n2 00:08:59.718 [job2] 00:08:59.718 filename=/dev/nvme0n3 00:08:59.718 [job3] 00:08:59.718 filename=/dev/nvme0n4 00:08:59.718 Could not set queue depth (nvme0n1) 00:08:59.718 Could not set queue depth (nvme0n2) 00:08:59.718 Could not set queue depth (nvme0n3) 00:08:59.718 Could not set queue depth (nvme0n4) 00:08:59.718 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.718 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.718 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.718 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.718 fio-3.35 00:08:59.718 Starting 4 threads 00:09:01.092 00:09:01.092 job0: (groupid=0, jobs=1): err= 0: pid=1450004: Thu Jul 25 10:15:50 2024 00:09:01.092 read: IOPS=4042, BW=15.8MiB/s (16.6MB/s)(16.1MiB/1021msec) 00:09:01.092 slat (usec): min=2, max=11746, avg=114.63, stdev=697.62 00:09:01.092 clat (usec): min=948, max=48568, avg=15473.04, stdev=6444.31 00:09:01.092 lat (usec): min=961, max=48573, avg=15587.67, stdev=6467.71 00:09:01.092 clat percentiles (usec): 00:09:01.092 | 1.00th=[ 4047], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11469], 00:09:01.092 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12780], 00:09:01.092 | 70.00th=[17433], 80.00th=[21365], 90.00th=[26346], 95.00th=[28967], 00:09:01.092 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 
99.95th=[35390], 00:09:01.092 | 99.99th=[48497] 00:09:01.092 write: IOPS=4643, BW=18.1MiB/s (19.0MB/s)(18.5MiB/1021msec); 0 zone resets 00:09:01.092 slat (usec): min=3, max=22867, avg=95.67, stdev=657.73 00:09:01.092 clat (usec): min=544, max=100107, avg=13871.26, stdev=14116.82 00:09:01.092 lat (usec): min=557, max=100139, avg=13966.93, stdev=14139.91 00:09:01.092 clat percentiles (usec): 00:09:01.092 | 1.00th=[ 873], 5.00th=[ 955], 10.00th=[ 4490], 20.00th=[ 9372], 00:09:01.092 | 30.00th=[ 9765], 40.00th=[ 10552], 50.00th=[ 11338], 60.00th=[ 11731], 00:09:01.092 | 70.00th=[ 12256], 80.00th=[ 16712], 90.00th=[ 21365], 95.00th=[ 22938], 00:09:01.092 | 99.00th=[ 99091], 99.50th=[100140], 99.90th=[100140], 99.95th=[100140], 00:09:01.092 | 99.99th=[100140] 00:09:01.092 bw ( KiB/s): min=15624, max=21280, per=31.66%, avg=18452.00, stdev=3999.40, samples=2 00:09:01.092 iops : min= 3906, max= 5320, avg=4613.00, stdev=999.85, samples=2 00:09:01.092 lat (usec) : 750=0.16%, 1000=3.77% 00:09:01.092 lat (msec) : 2=1.29%, 4=0.37%, 10=15.46%, 20=60.74%, 50=16.90% 00:09:01.092 lat (msec) : 100=1.24%, 250=0.08% 00:09:01.092 cpu : usr=4.80%, sys=6.27%, ctx=455, majf=0, minf=1 00:09:01.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:01.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.092 issued rwts: total=4127,4741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.092 job1: (groupid=0, jobs=1): err= 0: pid=1450005: Thu Jul 25 10:15:50 2024 00:09:01.092 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:09:01.092 slat (usec): min=2, max=11239, avg=100.04, stdev=680.58 00:09:01.092 clat (usec): min=4436, max=44988, avg=13449.66, stdev=4907.41 00:09:01.092 lat (usec): min=4446, max=45021, avg=13549.70, stdev=4929.71 00:09:01.092 clat percentiles (usec): 00:09:01.092 | 1.00th=[ 7046], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11207], 00:09:01.092 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:09:01.092 | 70.00th=[13566], 80.00th=[14877], 90.00th=[18220], 95.00th=[21890], 00:09:01.092 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:09:01.092 | 99.99th=[44827] 00:09:01.092 write: IOPS=4964, BW=19.4MiB/s (20.3MB/s)(19.6MiB/1013msec); 0 zone resets 00:09:01.092 slat (usec): min=4, max=35998, avg=99.36, stdev=858.66 00:09:01.092 clat (usec): min=1282, max=69525, avg=13080.18, stdev=5277.06 00:09:01.092 lat (usec): min=1297, max=69557, avg=13179.54, stdev=5353.71 00:09:01.093 clat percentiles (usec): 00:09:01.093 | 1.00th=[ 3982], 5.00th=[ 7046], 10.00th=[ 8586], 20.00th=[10290], 00:09:01.093 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[12780], 00:09:01.093 | 70.00th=[13173], 80.00th=[14746], 90.00th=[17695], 95.00th=[21365], 00:09:01.093 | 99.00th=[34341], 99.50th=[38011], 99.90th=[38011], 99.95th=[38536], 00:09:01.093 | 99.99th=[69731] 00:09:01.093 bw ( KiB/s): min=18728, max=20480, per=33.64%, avg=19604.00, stdev=1238.85, samples=2 00:09:01.093 iops : min= 4682, max= 5120, avg=4901.00, stdev=309.71, samples=2 00:09:01.093 lat (msec) : 2=0.04%, 4=0.50%, 10=12.73%, 20=79.54%, 50=7.18% 00:09:01.093 lat (msec) : 100=0.01% 00:09:01.093 cpu : usr=3.26%, sys=7.90%, ctx=448, majf=0, minf=1 00:09:01.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:01.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.093 issued rwts: total=4608,5029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.093 job2: (groupid=0, jobs=1): err= 0: pid=1450007: Thu Jul 25 10:15:50 2024 00:09:01.093 read: IOPS=1845, BW=7382KiB/s (7559kB/s)(7544KiB/1022msec) 00:09:01.093 slat (usec): min=2, max=34004, avg=320.88, stdev=2097.18 00:09:01.093 clat (usec): min=10411, max=90534, avg=35953.23, stdev=19397.91 00:09:01.093 lat (usec): min=10696, max=90540, avg=36274.11, stdev=19496.04 00:09:01.093 clat percentiles (usec): 00:09:01.093 | 1.00th=[10814], 5.00th=[13698], 10.00th=[13960], 20.00th=[15926], 00:09:01.093 | 30.00th=[25035], 40.00th=[29492], 50.00th=[31065], 60.00th=[35390], 00:09:01.093 | 70.00th=[43254], 80.00th=[51643], 90.00th=[65274], 95.00th=[74974], 00:09:01.093 | 99.00th=[85459], 99.50th=[85459], 99.90th=[90702], 99.95th=[90702], 00:09:01.093 | 99.99th=[90702] 00:09:01.093 write: IOPS=2003, BW=8016KiB/s (8208kB/s)(8192KiB/1022msec); 0 zone resets 00:09:01.093 slat (usec): min=4, max=27482, avg=186.19, stdev=1103.90 00:09:01.093 clat (msec): min=9, max=128, avg=29.44, stdev=21.83 00:09:01.093 lat (msec): min=9, max=128, avg=29.62, stdev=21.90 00:09:01.093 clat percentiles (msec): 00:09:01.093 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 15], 00:09:01.093 | 30.00th=[ 16], 40.00th=[ 19], 50.00th=[ 24], 60.00th=[ 24], 00:09:01.093 | 70.00th=[ 30], 80.00th=[ 41], 90.00th=[ 59], 95.00th=[ 83], 00:09:01.093 | 99.00th=[ 105], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:09:01.093 | 99.99th=[ 129] 00:09:01.093 bw ( KiB/s): min= 8192, max= 8192, per=14.06%, avg=8192.00, stdev= 0.00, samples=2 00:09:01.093 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:09:01.093 lat (msec) : 10=0.58%, 20=33.55%, 50=48.70%, 100=15.96%, 250=1.19% 00:09:01.093 cpu : usr=1.96%, sys=3.13%, ctx=175, majf=0, minf=1 00:09:01.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:01.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.093 issued rwts: total=1886,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.093 job3: (groupid=0, jobs=1): err= 0: pid=1450008: Thu Jul 25 10:15:50 2024 00:09:01.093 read: IOPS=2750, BW=10.7MiB/s (11.3MB/s)(10.9MiB/1016msec) 00:09:01.093 slat (usec): min=4, max=26096, avg=169.17, stdev=1206.58 00:09:01.093 clat (usec): min=7031, max=63128, avg=20507.66, stdev=9184.79 00:09:01.093 lat (usec): min=7047, max=67837, avg=20676.83, stdev=9274.13 00:09:01.093 clat percentiles (usec): 00:09:01.093 | 1.00th=[10814], 5.00th=[13698], 10.00th=[13960], 20.00th=[14877], 00:09:01.093 | 30.00th=[15664], 40.00th=[16712], 50.00th=[16909], 60.00th=[17957], 00:09:01.093 | 70.00th=[19268], 80.00th=[23987], 90.00th=[31065], 95.00th=[45351], 00:09:01.093 | 99.00th=[53216], 99.50th=[57934], 99.90th=[63177], 99.95th=[63177], 00:09:01.093 | 99.99th=[63177] 00:09:01.093 write: IOPS=3023, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1016msec); 0 zone resets 00:09:01.093 slat (usec): min=4, max=16529, avg=154.33, stdev=910.14 00:09:01.093 clat (usec): min=1204, max=98191, avg=23149.10, stdev=16205.13 00:09:01.093 lat (usec): min=1214, max=98212, avg=23303.43, stdev=16309.49 00:09:01.093 clat percentiles 
(usec): 00:09:01.093 | 1.00th=[ 5276], 5.00th=[10421], 10.00th=[12780], 20.00th=[14615], 00:09:01.093 | 30.00th=[15664], 40.00th=[16188], 50.00th=[17171], 60.00th=[19006], 00:09:01.093 | 70.00th=[21890], 80.00th=[30540], 90.00th=[42730], 95.00th=[47973], 00:09:01.093 | 99.00th=[92799], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:09:01.093 | 99.99th=[98042] 00:09:01.093 bw ( KiB/s): min= 8208, max=16368, per=21.09%, avg=12288.00, stdev=5769.99, samples=2 00:09:01.093 iops : min= 2052, max= 4092, avg=3072.00, stdev=1442.50, samples=2 00:09:01.093 lat (msec) : 2=0.05%, 10=2.59%, 20=65.16%, 50=29.08%, 100=3.12% 00:09:01.093 cpu : usr=4.53%, sys=7.39%, ctx=315, majf=0, minf=1 00:09:01.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:01.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.093 issued rwts: total=2794,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.093 00:09:01.093 Run status group 0 (all jobs): 00:09:01.093 READ: bw=51.3MiB/s (53.8MB/s), 7382KiB/s-17.8MiB/s (7559kB/s-18.6MB/s), io=52.4MiB (54.9MB), run=1013-1022msec 00:09:01.093 WRITE: bw=56.9MiB/s (59.7MB/s), 8016KiB/s-19.4MiB/s (8208kB/s-20.3MB/s), io=58.2MiB (61.0MB), run=1013-1022msec 00:09:01.093 00:09:01.093 Disk stats (read/write): 00:09:01.093 nvme0n1: ios=3634/4001, merge=0/0, ticks=14311/12532, in_queue=26843, util=91.58% 00:09:01.093 nvme0n2: ios=4146/4466, merge=0/0, ticks=34645/32679, in_queue=67324, util=97.76% 00:09:01.093 nvme0n3: ios=1236/1536, merge=0/0, ticks=16275/12185, in_queue=28460, util=96.87% 00:09:01.093 nvme0n4: ios=2612/2671, merge=0/0, ticks=47854/52477, in_queue=100331, util=98.63% 00:09:01.093 10:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:01.093 [global] 00:09:01.093 thread=1 00:09:01.093 invalidate=1 00:09:01.093 rw=randwrite 00:09:01.093 time_based=1 00:09:01.093 runtime=1 00:09:01.093 ioengine=libaio 00:09:01.093 direct=1 00:09:01.093 bs=4096 00:09:01.093 iodepth=128 00:09:01.093 norandommap=0 00:09:01.093 numjobs=1 00:09:01.093 00:09:01.093 verify_dump=1 00:09:01.093 verify_backlog=512 00:09:01.093 verify_state_save=0 00:09:01.093 do_verify=1 00:09:01.093 verify=crc32c-intel 00:09:01.093 [job0] 00:09:01.093 filename=/dev/nvme0n1 00:09:01.093 [job1] 00:09:01.093 filename=/dev/nvme0n2 00:09:01.093 [job2] 00:09:01.093 filename=/dev/nvme0n3 00:09:01.093 [job3] 00:09:01.093 filename=/dev/nvme0n4 00:09:01.093 Could not set queue depth (nvme0n1) 00:09:01.093 Could not set queue depth (nvme0n2) 00:09:01.093 Could not set queue depth (nvme0n3) 00:09:01.093 Could not set queue depth (nvme0n4) 00:09:01.352 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.352 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.352 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.352 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.352 fio-3.35 00:09:01.352 Starting 4 threads 00:09:02.728 00:09:02.728 job0: (groupid=0, jobs=1): err= 0: pid=1450193: Thu Jul 25 10:15:52 2024 
00:09:02.728 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:09:02.728 slat (usec): min=4, max=7195, avg=102.91, stdev=562.85 00:09:02.728 clat (usec): min=7576, max=32374, avg=13450.68, stdev=3294.88 00:09:02.728 lat (usec): min=7593, max=32391, avg=13553.59, stdev=3344.35 00:09:02.728 clat percentiles (usec): 00:09:02.728 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11600], 00:09:02.728 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:09:02.728 | 70.00th=[13173], 80.00th=[14746], 90.00th=[18220], 95.00th=[20841], 00:09:02.728 | 99.00th=[25035], 99.50th=[27132], 99.90th=[32375], 99.95th=[32375], 00:09:02.728 | 99.99th=[32375] 00:09:02.728 write: IOPS=4692, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1009msec); 0 zone resets 00:09:02.728 slat (usec): min=4, max=11101, avg=98.09, stdev=548.80 00:09:02.728 clat (usec): min=4717, max=47125, avg=13814.23, stdev=5050.88 00:09:02.728 lat (usec): min=6698, max=47139, avg=13912.32, stdev=5092.56 00:09:02.728 clat percentiles (usec): 00:09:02.728 | 1.00th=[ 8094], 5.00th=[10290], 10.00th=[11076], 20.00th=[11469], 00:09:02.728 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[12911], 00:09:02.728 | 70.00th=[13304], 80.00th=[14746], 90.00th=[17433], 95.00th=[20317], 00:09:02.728 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:09:02.728 | 99.99th=[46924] 00:09:02.728 bw ( KiB/s): min=18040, max=18872, per=28.29%, avg=18456.00, stdev=588.31, samples=2 00:09:02.728 iops : min= 4510, max= 4718, avg=4614.00, stdev=147.08, samples=2 00:09:02.728 lat (msec) : 10=4.78%, 20=89.83%, 50=5.38% 00:09:02.728 cpu : usr=8.43%, sys=12.20%, ctx=391, majf=0, minf=11 00:09:02.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:02.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.728 issued rwts: total=4608,4735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.728 job1: (groupid=0, jobs=1): err= 0: pid=1450194: Thu Jul 25 10:15:52 2024 00:09:02.728 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:09:02.728 slat (usec): min=3, max=11611, avg=142.06, stdev=870.05 00:09:02.728 clat (usec): min=7573, max=43983, avg=17923.85, stdev=6942.65 00:09:02.728 lat (usec): min=7583, max=43994, avg=18065.91, stdev=6980.31 00:09:02.728 clat percentiles (usec): 00:09:02.728 | 1.00th=[ 9241], 5.00th=[11600], 10.00th=[12649], 20.00th=[13698], 00:09:02.728 | 30.00th=[14091], 40.00th=[14484], 50.00th=[15139], 60.00th=[15926], 00:09:02.728 | 70.00th=[17957], 80.00th=[20841], 90.00th=[30802], 95.00th=[32637], 00:09:02.728 | 99.00th=[40109], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:09:02.728 | 99.99th=[43779] 00:09:02.728 write: IOPS=3943, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1004msec); 0 zone resets 00:09:02.728 slat (usec): min=4, max=9005, avg=114.41, stdev=672.47 00:09:02.728 clat (usec): min=453, max=40277, avg=15779.93, stdev=5498.41 00:09:02.728 lat (usec): min=3329, max=40288, avg=15894.34, stdev=5525.93 00:09:02.728 clat percentiles (usec): 00:09:02.728 | 1.00th=[ 6980], 5.00th=[10028], 10.00th=[11469], 20.00th=[12387], 00:09:02.728 | 30.00th=[13042], 40.00th=[13698], 50.00th=[13960], 60.00th=[14877], 00:09:02.728 | 70.00th=[15795], 80.00th=[18744], 90.00th=[23462], 95.00th=[27132], 00:09:02.728 | 99.00th=[35914], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 
00:09:02.728 | 99.99th=[40109] 00:09:02.728 bw ( KiB/s): min=12608, max=18040, per=23.49%, avg=15324.00, stdev=3841.00, samples=2 00:09:02.728 iops : min= 3152, max= 4510, avg=3831.00, stdev=960.25, samples=2 00:09:02.728 lat (usec) : 500=0.01% 00:09:02.728 lat (msec) : 4=0.36%, 10=3.30%, 20=79.37%, 50=16.96% 00:09:02.728 cpu : usr=4.99%, sys=7.08%, ctx=376, majf=0, minf=17 00:09:02.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:02.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.729 issued rwts: total=3584,3959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.729 job2: (groupid=0, jobs=1): err= 0: pid=1450195: Thu Jul 25 10:15:52 2024 00:09:02.729 read: IOPS=3901, BW=15.2MiB/s (16.0MB/s)(15.4MiB/1008msec) 00:09:02.729 slat (usec): min=3, max=17469, avg=127.39, stdev=705.97 00:09:02.729 clat (usec): min=2692, max=34545, avg=16572.11, stdev=4561.31 00:09:02.729 lat (usec): min=7756, max=42557, avg=16699.50, stdev=4565.08 00:09:02.729 clat percentiles (usec): 00:09:02.729 | 1.00th=[ 8455], 5.00th=[11863], 10.00th=[12649], 20.00th=[13960], 00:09:02.729 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15401], 60.00th=[16057], 00:09:02.729 | 70.00th=[16712], 80.00th=[18220], 90.00th=[23462], 95.00th=[27395], 00:09:02.729 | 99.00th=[32900], 99.50th=[32900], 99.90th=[34341], 99.95th=[34341], 00:09:02.729 | 99.99th=[34341] 00:09:02.729 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:09:02.729 slat (usec): min=4, max=9822, avg=113.69, stdev=588.48 00:09:02.729 clat (usec): min=3922, max=29189, avg=15090.52, stdev=3451.20 00:09:02.729 lat (usec): min=7622, max=29199, avg=15204.21, stdev=3457.83 00:09:02.729 clat percentiles (usec): 00:09:02.729 | 1.00th=[10814], 5.00th=[11338], 10.00th=[11994], 20.00th=[13042], 00:09:02.729 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14746], 00:09:02.729 | 70.00th=[15401], 80.00th=[16188], 90.00th=[17433], 95.00th=[23725], 00:09:02.729 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:09:02.729 | 99.99th=[29230] 00:09:02.729 bw ( KiB/s): min=16384, max=16384, per=25.12%, avg=16384.00, stdev= 0.00, samples=2 00:09:02.729 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:02.729 lat (msec) : 4=0.02%, 10=0.83%, 20=89.00%, 50=10.14% 00:09:02.729 cpu : usr=3.87%, sys=7.25%, ctx=452, majf=0, minf=9 00:09:02.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:02.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.729 issued rwts: total=3933,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.729 job3: (groupid=0, jobs=1): err= 0: pid=1450196: Thu Jul 25 10:15:52 2024 00:09:02.729 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:09:02.729 slat (usec): min=4, max=14230, avg=134.81, stdev=836.73 00:09:02.729 clat (usec): min=6157, max=30045, avg=17373.26, stdev=4379.72 00:09:02.729 lat (usec): min=6189, max=34095, avg=17508.08, stdev=4432.67 00:09:02.729 clat percentiles (usec): 00:09:02.729 | 1.00th=[10552], 5.00th=[11469], 10.00th=[13042], 20.00th=[14353], 00:09:02.729 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[17433], 
00:09:02.729 | 70.00th=[19792], 80.00th=[22676], 90.00th=[23987], 95.00th=[24773], 00:09:02.729 | 99.00th=[27919], 99.50th=[28705], 99.90th=[29754], 99.95th=[29754], 00:09:02.729 | 99.99th=[30016] 00:09:02.729 write: IOPS=3646, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1005msec); 0 zone resets 00:09:02.729 slat (usec): min=5, max=11559, avg=125.09, stdev=654.91 00:09:02.729 clat (usec): min=270, max=68790, avg=17793.99, stdev=10609.86 00:09:02.729 lat (usec): min=289, max=73605, avg=17919.07, stdev=10677.09 00:09:02.729 clat percentiles (usec): 00:09:02.729 | 1.00th=[ 2540], 5.00th=[ 5932], 10.00th=[ 9241], 20.00th=[12649], 00:09:02.729 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15401], 60.00th=[15795], 00:09:02.729 | 70.00th=[17957], 80.00th=[21103], 90.00th=[24773], 95.00th=[36439], 00:09:02.729 | 99.00th=[64750], 99.50th=[66847], 99.90th=[68682], 99.95th=[68682], 00:09:02.729 | 99.99th=[68682] 00:09:02.729 bw ( KiB/s): min=12048, max=16624, per=21.98%, avg=14336.00, stdev=3235.72, samples=2 00:09:02.729 iops : min= 3012, max= 4156, avg=3584.00, stdev=808.93, samples=2 00:09:02.729 lat (usec) : 500=0.04%, 750=0.10%, 1000=0.08% 00:09:02.729 lat (msec) : 2=0.08%, 4=0.77%, 10=5.12%, 20=67.21%, 50=24.64% 00:09:02.729 lat (msec) : 100=1.96% 00:09:02.729 cpu : usr=5.38%, sys=9.56%, ctx=414, majf=0, minf=13 00:09:02.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:02.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.729 issued rwts: total=3584,3665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.729 00:09:02.729 Run status group 0 (all jobs): 00:09:02.729 READ: bw=60.8MiB/s (63.8MB/s), 13.9MiB/s-17.8MiB/s (14.6MB/s-18.7MB/s), io=61.4MiB (64.3MB), run=1004-1009msec 00:09:02.729 WRITE: bw=63.7MiB/s (66.8MB/s), 14.2MiB/s-18.3MiB/s (14.9MB/s-19.2MB/s), io=64.3MiB (67.4MB), run=1004-1009msec 00:09:02.729 00:09:02.729 Disk stats (read/write): 00:09:02.729 nvme0n1: ios=3621/4096, merge=0/0, ticks=24609/25670, in_queue=50279, util=100.00% 00:09:02.729 nvme0n2: ios=3098/3200, merge=0/0, ticks=24119/19129, in_queue=43248, util=90.96% 00:09:02.729 nvme0n3: ios=3456/3584, merge=0/0, ticks=14986/13869, in_queue=28855, util=95.00% 00:09:02.729 nvme0n4: ios=3129/3175, merge=0/0, ticks=37646/44549, in_queue=82195, util=95.59% 00:09:02.729 10:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:02.729 10:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1450298 00:09:02.729 10:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:02.729 10:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:02.729 [global] 00:09:02.729 thread=1 00:09:02.729 invalidate=1 00:09:02.729 rw=read 00:09:02.729 time_based=1 00:09:02.729 runtime=10 00:09:02.729 ioengine=libaio 00:09:02.729 direct=1 00:09:02.729 bs=4096 00:09:02.729 iodepth=1 00:09:02.729 norandommap=1 00:09:02.729 numjobs=1 00:09:02.729 00:09:02.729 [job0] 00:09:02.729 filename=/dev/nvme0n1 00:09:02.729 [job1] 00:09:02.729 filename=/dev/nvme0n2 00:09:02.729 [job2] 00:09:02.729 filename=/dev/nvme0n3 00:09:02.729 [job3] 00:09:02.729 filename=/dev/nvme0n4 00:09:02.729 Could not set queue depth (nvme0n1) 00:09:02.729 Could not set queue 
depth (nvme0n2) 00:09:02.729 Could not set queue depth (nvme0n3) 00:09:02.729 Could not set queue depth (nvme0n4) 00:09:02.729 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.729 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.729 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.729 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.729 fio-3.35 00:09:02.729 Starting 4 threads 00:09:06.012 10:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:06.012 10:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:06.012 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=30191616, buflen=4096 00:09:06.012 fio: pid=1450384, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:06.012 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=23945216, buflen=4096 00:09:06.012 fio: pid=1450383, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:06.012 10:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.012 10:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:06.270 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=14192640, buflen=4096 00:09:06.270 fio: pid=1450380, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:06.270 10:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.270 10:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:06.529 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=47448064, buflen=4096 00:09:06.529 fio: pid=1450381, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:06.787 10:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.787 10:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:06.787 00:09:06.787 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1450380: Thu Jul 25 10:15:56 2024 00:09:06.787 read: IOPS=981, BW=3925KiB/s (4019kB/s)(13.5MiB/3531msec) 00:09:06.787 slat (usec): min=4, max=21793, avg=23.87, stdev=469.79 00:09:06.787 clat (usec): min=256, max=42055, avg=986.48, stdev=5154.67 00:09:06.787 lat (usec): min=264, max=42072, avg=1010.36, stdev=5176.24 00:09:06.787 clat percentiles (usec): 00:09:06.787 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:09:06.787 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:09:06.787 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 416], 00:09:06.787 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:06.787 | 99.99th=[42206] 00:09:06.787 bw ( KiB/s): min= 96, max= 9032, per=9.72%, avg=2874.67, stdev=3353.70, samples=6 00:09:06.787 iops : min= 24, max= 2258, avg=718.67, stdev=838.42, samples=6 00:09:06.787 lat (usec) : 500=97.66%, 750=0.52%, 1000=0.03% 00:09:06.787 lat (msec) : 2=0.12%, 4=0.03%, 50=1.62% 00:09:06.787 cpu : usr=0.40%, sys=1.05%, ctx=3470, majf=0, minf=1 00:09:06.787 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.787 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.787 issued rwts: total=3466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.787 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.787 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1450381: Thu Jul 25 10:15:56 2024 00:09:06.787 read: IOPS=3030, BW=11.8MiB/s (12.4MB/s)(45.2MiB/3823msec) 00:09:06.788 slat (usec): min=4, max=25929, avg=17.27, stdev=339.35 00:09:06.788 clat (usec): min=224, max=41380, avg=309.22, stdev=385.53 00:09:06.788 lat (usec): min=231, max=41397, avg=326.49, stdev=515.56 00:09:06.788 clat percentiles (usec): 00:09:06.788 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:09:06.788 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 326], 00:09:06.788 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 375], 00:09:06.788 | 99.00th=[ 465], 99.50th=[ 494], 99.90th=[ 644], 99.95th=[ 840], 00:09:06.788 | 99.99th=[ 1844] 00:09:06.788 bw ( KiB/s): min= 9800, max=13520, per=40.73%, avg=12047.86, stdev=1253.47, samples=7 00:09:06.788 iops : min= 2450, max= 3380, avg=3011.86, stdev=313.25, samples=7 00:09:06.788 lat (usec) : 250=14.24%, 500=85.32%, 750=0.37%, 1000=0.02% 00:09:06.788 lat (msec) : 2=0.03%, 50=0.01% 00:09:06.788 cpu : usr=1.81%, sys=5.02%, ctx=11593, majf=0, minf=1 00:09:06.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.788 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.788 issued rwts: total=11585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.788 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1450383: Thu Jul 25 10:15:56 2024 00:09:06.788 read: IOPS=1810, BW=7242KiB/s (7416kB/s)(22.8MiB/3229msec) 00:09:06.788 slat (usec): min=5, max=16452, avg=17.38, stdev=259.58 00:09:06.788 clat (usec): min=252, max=45091, avg=527.71, stdev=2792.50 00:09:06.788 lat (usec): min=259, max=45109, avg=545.09, stdev=2805.12 00:09:06.788 clat percentiles (usec): 00:09:06.788 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 306], 00:09:06.788 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:09:06.788 | 70.00th=[ 338], 80.00th=[ 371], 90.00th=[ 408], 95.00th=[ 437], 00:09:06.788 | 99.00th=[ 494], 99.50th=[ 611], 99.90th=[42206], 99.95th=[42206], 00:09:06.788 | 99.99th=[45351] 00:09:06.788 bw ( KiB/s): min= 104, max=12096, per=24.51%, avg=7249.33, stdev=5235.98, samples=6 00:09:06.788 iops : min= 26, max= 3024, avg=1812.33, stdev=1309.00, samples=6 00:09:06.788 lat (usec) : 500=99.04%, 750=0.46% 00:09:06.788 lat (msec) : 20=0.02%, 50=0.46% 00:09:06.788 cpu : usr=1.39%, sys=3.90%, 
ctx=5849, majf=0, minf=1 00:09:06.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.788 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.788 issued rwts: total=5847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.788 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1450384: Thu Jul 25 10:15:56 2024 00:09:06.788 read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(28.8MiB/2932msec) 00:09:06.788 slat (nsec): min=5941, max=61619, avg=12068.12, stdev=5245.63 00:09:06.788 clat (usec): min=256, max=41406, avg=378.75, stdev=1255.61 00:09:06.788 lat (usec): min=262, max=41437, avg=390.82, stdev=1256.04 00:09:06.788 clat percentiles (usec): 00:09:06.788 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 310], 00:09:06.788 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 347], 00:09:06.788 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 388], 95.00th=[ 404], 00:09:06.788 | 99.00th=[ 502], 99.50th=[ 578], 99.90th=[ 7963], 99.95th=[41157], 00:09:06.788 | 99.99th=[41157] 00:09:06.788 bw ( KiB/s): min= 6520, max=11824, per=33.04%, avg=9772.80, stdev=2056.35, samples=5 00:09:06.788 iops : min= 1630, max= 2956, avg=2443.20, stdev=514.09, samples=5 00:09:06.788 lat (usec) : 500=98.98%, 750=0.84% 00:09:06.788 lat (msec) : 2=0.03%, 4=0.03%, 10=0.01%, 50=0.09% 00:09:06.788 cpu : usr=2.01%, sys=5.02%, ctx=7372, majf=0, minf=1 00:09:06.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.788 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.788 issued rwts: total=7372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.788 00:09:06.788 Run status group 0 (all jobs): 00:09:06.788 READ: bw=28.9MiB/s (30.3MB/s), 3925KiB/s-11.8MiB/s (4019kB/s-12.4MB/s), io=110MiB (116MB), run=2932-3823msec 00:09:06.788 00:09:06.788 Disk stats (read/write): 00:09:06.788 nvme0n1: ios=3053/0, merge=0/0, ticks=3273/0, in_queue=3273, util=95.11% 00:09:06.788 nvme0n2: ios=10870/0, merge=0/0, ticks=3269/0, in_queue=3269, util=94.51% 00:09:06.788 nvme0n3: ios=5517/0, merge=0/0, ticks=2935/0, in_queue=2935, util=95.95% 00:09:06.788 nvme0n4: ios=7192/0, merge=0/0, ticks=2639/0, in_queue=2639, util=96.74% 00:09:07.047 10:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.047 10:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:07.305 10:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.305 10:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:07.563 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.563 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:07.821 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.821 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:08.078 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:08.078 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1450298 00:09:08.078 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:08.078 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:08.336 nvmf hotplug test: fio failed as expected 00:09:08.336 10:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.594 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:08.594 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:08.594 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:08.594 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:08.594 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:08.594 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.594 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:08.594 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:08.594 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
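Annotation: the pass above is the hotplug negative test. fio is started in the background against the exported namespaces, the RAID and malloc bdevs behind them are deleted mid-run, and every job is expected to die with err=121 (Remote I/O error); a zero fio exit status here would itself be a failure. A condensed sketch of the same sequence, using the script paths (abbreviated), flags, and bdev names shown in the log:

scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # long-running read workload, as in fio.sh
fio_pid=$!
sleep 3                                                    # let the jobs ramp up first
scripts/rpc.py bdev_raid_delete concat0                    # yank the raid bdevs out from under fio
scripts/rpc.py bdev_raid_delete raid0
for malloc in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$malloc"            # then the backing malloc bdevs
done
if wait "$fio_pid"; then
    echo "unexpected: fio survived bdev removal" >&2
else
    echo 'nvmf hotplug test: fio failed as expected'       # err=121 on every job is the pass condition
fi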
00:09:08.595 rmmod nvme_tcp 00:09:08.595 rmmod nvme_fabrics 00:09:08.595 rmmod nvme_keyring 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1448705 ']' 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1448705 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1448705 ']' 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1448705 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1448705 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1448705' 00:09:08.595 killing process with pid 1448705 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1448705 00:09:08.595 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1448705 00:09:08.854 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.854 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.854 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.854 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.854 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.854 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.854 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.854 10:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.760 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.760 00:09:10.760 real 0m23.828s 00:09:10.760 user 1m23.639s 00:09:10.760 sys 0m7.697s 00:09:10.760 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.760 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.760 ************************************ 00:09:10.760 END TEST nvmf_fio_target 00:09:10.760 ************************************ 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.018 ************************************ 00:09:11.018 START TEST nvmf_bdevio 00:09:11.018 ************************************ 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:11.018 * Looking for test storage... 00:09:11.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
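Annotation: nvmf/common.sh, sourced above, builds the initiator identity from nvme-cli: nvme gen-hostnqn emits a UUID-based NQN, and the host ID is the UUID portion of that NQN. A minimal sketch of the derivation plus the connect call it feeds — the address, port, and subsystem values are the ones this run uses; the suffix-stripping expansion is an assumption, since the log only shows the resulting values:

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed derivation: keep the text after the last ':', i.e. the UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1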
00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.018 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:11.019 10:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.923 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:12.924 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:12.924 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.924 
10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:12.924 Found net devices under 0000:08:00.0: cvl_0_0 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:12.924 Found net devices under 0000:08:00.1: cvl_0_1 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.924 10:16:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:12.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:12.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms
00:09:12.924
00:09:12.924 --- 10.0.0.2 ping statistics ---
00:09:12.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:12.924 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:12.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:12.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms
00:09:12.924
00:09:12.924 --- 10.0.0.1 ping statistics ---
00:09:12.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:12.924 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:12.924 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1452416
00:09:12.925 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:09:12.925 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1452416
00:09:12.925 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1452416 ']'
00:09:12.925 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:12.925 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:12.925 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:12.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:12.925 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:12.925 [2024-07-25 10:16:02.567017] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
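The trace above is nvmf_tcp_init building the point-to-point test link: one port of the NIC (cvl_0_0, 10.0.0.2) is moved into a private network namespace and becomes the target side, while the other port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, so traffic between them crosses the physical link rather than the kernel loopback path. A condensed sketch of the same sequence, with the interface and namespace names taken from the log (substitute your own NIC's port names):

  NS=cvl_0_0_ns_spdk   # namespace that will own the target-side port
  TGT=cvl_0_0          # target interface -> 10.0.0.2
  INI=cvl_0_1          # initiator interface -> 10.0.0.1
  ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
  ip netns add "$NS"
  ip link set "$TGT" netns "$NS"                                # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev "$INI"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  ip link set "$INI" up
  ip netns exec "$NS" ip link set "$TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT   # 4420 is the IANA-assigned NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1  # verify both directions before starting the target

From this point on, everything the target runs is prefixed with 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD array), which is why nvmf_tgt is launched through it above.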
00:09:12.925 [2024-07-25 10:16:02.567115] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.925 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.925 [2024-07-25 10:16:02.633928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.184 [2024-07-25 10:16:02.755234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.184 [2024-07-25 10:16:02.755298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.184 [2024-07-25 10:16:02.755323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.184 [2024-07-25 10:16:02.755337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.184 [2024-07-25 10:16:02.755349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.184 [2024-07-25 10:16:02.755429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:13.184 [2024-07-25 10:16:02.755478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:13.184 [2024-07-25 10:16:02.755580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.184 [2024-07-25 10:16:02.755546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:13.184 [2024-07-25 10:16:02.908739] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.184 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:13.185 Malloc0 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.185 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:13.185 [2024-07-25 10:16:02.959229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.443 { 00:09:13.443 "params": { 00:09:13.443 "name": "Nvme$subsystem", 00:09:13.443 "trtype": "$TEST_TRANSPORT", 00:09:13.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.443 "adrfam": "ipv4", 00:09:13.443 "trsvcid": "$NVMF_PORT", 00:09:13.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.443 "hdgst": ${hdgst:-false}, 00:09:13.443 "ddgst": ${ddgst:-false} 00:09:13.443 }, 00:09:13.443 "method": "bdev_nvme_attach_controller" 00:09:13.443 } 00:09:13.443 EOF 00:09:13.443 )") 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
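Stripped of the xtrace bookkeeping, target/bdevio.sh has now done two things: provisioned the target over JSON-RPC and generated an initiator config for the bdevio binary. rpc_cmd is the harness's shorthand for issuing RPCs against /var/tmp/spdk.sock; a standalone equivalent using SPDK's scripts/rpc.py from the same tree, with the arguments exactly as traced above, would be:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192       # flags copied verbatim from the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB / 512 B blocks = 131072 blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json then substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT into the heredoc template above and hands the result to bdevio over --json /dev/fd/62; the resolved document is printed verbatim at the start of the next block. The Nvme1n1 size reported under "I/O targets" below, 131072 blocks of 512 bytes, is exactly the 64 MiB malloc bdev created here.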
00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:13.443 10:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.443 "params": { 00:09:13.443 "name": "Nvme1", 00:09:13.443 "trtype": "tcp", 00:09:13.443 "traddr": "10.0.0.2", 00:09:13.443 "adrfam": "ipv4", 00:09:13.443 "trsvcid": "4420", 00:09:13.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.443 "hdgst": false, 00:09:13.443 "ddgst": false 00:09:13.443 }, 00:09:13.443 "method": "bdev_nvme_attach_controller" 00:09:13.443 }' 00:09:13.443 [2024-07-25 10:16:03.010096] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:09:13.443 [2024-07-25 10:16:03.010185] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1452525 ] 00:09:13.443 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.443 [2024-07-25 10:16:03.072120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:13.443 [2024-07-25 10:16:03.192407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.443 [2024-07-25 10:16:03.192458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.443 [2024-07-25 10:16:03.192461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.702 I/O targets: 00:09:13.702 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:13.702 00:09:13.702 00:09:13.702 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.702 http://cunit.sourceforge.net/ 00:09:13.702 00:09:13.702 00:09:13.702 Suite: bdevio tests on: Nvme1n1 00:09:13.702 Test: blockdev write read block ...passed 00:09:13.702 Test: blockdev write zeroes read block ...passed 00:09:13.702 Test: blockdev write zeroes read no split ...passed 00:09:13.702 Test: blockdev write zeroes read split ...passed 00:09:13.960 Test: blockdev write zeroes read split partial ...passed 00:09:13.960 Test: blockdev reset ...[2024-07-25 10:16:03.484789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:13.960 [2024-07-25 10:16:03.484917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff5f60 (9): Bad file descriptor 00:09:13.960 [2024-07-25 10:16:03.500924] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:13.960 passed 00:09:13.960 Test: blockdev write read 8 blocks ...passed 00:09:13.960 Test: blockdev write read size > 128k ...passed 00:09:13.960 Test: blockdev write read invalid size ...passed 00:09:13.960 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:13.960 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:13.960 Test: blockdev write read max offset ...passed 00:09:13.960 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:13.960 Test: blockdev writev readv 8 blocks ...passed 00:09:13.960 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.960 Test: blockdev writev readv block ...passed 00:09:13.960 Test: blockdev writev readv size > 128k ...passed 00:09:13.960 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.960 Test: blockdev comparev and writev ...[2024-07-25 10:16:03.677146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.960 [2024-07-25 10:16:03.677185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:13.960 [2024-07-25 10:16:03.677222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.960 [2024-07-25 10:16:03.677241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:13.960 [2024-07-25 10:16:03.677599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.960 [2024-07-25 10:16:03.677625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:13.960 [2024-07-25 10:16:03.677649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.960 [2024-07-25 10:16:03.677665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:13.960 [2024-07-25 10:16:03.677994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.960 [2024-07-25 10:16:03.678019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:13.960 [2024-07-25 10:16:03.678042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.960 [2024-07-25 10:16:03.678059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:13.960 [2024-07-25 10:16:03.678402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.960 [2024-07-25 10:16:03.678426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:13.960 [2024-07-25 10:16:03.678450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.960 [2024-07-25 10:16:03.678466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:09:13.960 passed
00:09:14.218 Test: blockdev nvme passthru rw ...passed
00:09:14.218 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:16:03.762858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:14.218 [2024-07-25 10:16:03.762888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:09:14.218 [2024-07-25 10:16:03.763142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:14.218 [2024-07-25 10:16:03.763167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:09:14.218 [2024-07-25 10:16:03.763441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:14.218 [2024-07-25 10:16:03.763465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:09:14.218 [2024-07-25 10:16:03.763684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:14.218 [2024-07-25 10:16:03.763709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:09:14.218 passed
00:09:14.218 Test: blockdev nvme admin passthru ...passed
00:09:14.218 Test: blockdev copy ...passed
00:09:14.218
00:09:14.218 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:14.218               suites      1      1    n/a      0        0
00:09:14.218                tests     23     23     23      0        0
00:09:14.218              asserts    152    152    152      0      n/a
00:09:14.218
00:09:14.218 Elapsed time = 0.908 seconds
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:14.476 rmmod nvme_tcp
00:09:14.476 rmmod nvme_fabrics
00:09:14.476 rmmod nvme_keyring
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0
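The teardown that starts at target/bdevio.sh@26 above and continues below mirrors the setup, and the ordering is worth noting: delete the subsystem over RPC, disarm the EXIT trap so nvmftestfini runs exactly once, unload the kernel initiator modules (the rmmod lines are modprobe -r's verbose output), kill the target, and only then dismantle the namespace plumbing. Roughly, reusing the $rpc shorthand from the earlier sketch (the 'ip netns delete' line is an assumption; the log shows the _remove_spdk_ns helper being invoked but not its body):

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  trap - SIGINT SIGTERM EXIT            # cleanup now happens exactly once, right here
  sync
  modprobe -v -r nvme-tcp               # drags out nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 1452416 && wait 1452416          # nvmfpid recorded when nvmf_tgt was launched
  ip netns delete cvl_0_0_ns_spdk       # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1

The retry loop around the module unload (for i in {1..20} under set +e) is presumably there because the modules can still be busy for a moment after the last disconnect.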
00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1452416 ']' 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1452416 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1452416 ']' 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1452416 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1452416 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1452416' 00:09:14.476 killing process with pid 1452416 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1452416 00:09:14.476 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1452416 00:09:14.734 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.734 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.734 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:14.734 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.734 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.734 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.734 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.734 10:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.642 10:16:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:16.642 00:09:16.642 real 0m5.808s 00:09:16.642 user 0m8.607s 00:09:16.642 sys 0m1.847s 00:09:16.642 10:16:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.642 10:16:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.642 ************************************ 00:09:16.642 END TEST nvmf_bdevio 00:09:16.642 ************************************ 00:09:16.642 10:16:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:16.642 00:09:16.642 real 3m50.499s 00:09:16.642 user 10m9.376s 00:09:16.642 sys 1m4.380s 00:09:16.642 10:16:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.642 10:16:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.642 ************************************ 00:09:16.642 END TEST nvmf_target_core 00:09:16.642 ************************************ 00:09:16.901 10:16:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:16.901 10:16:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:16.901 10:16:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.901 10:16:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:16.901 ************************************ 00:09:16.901 START TEST nvmf_target_extra 00:09:16.901 ************************************ 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:16.901 * Looking for test storage... 00:09:16.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.901 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
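Every test in this log is dispatched through run_test, and the banners plus the real/user/sys triple in the timing output tell you roughly what it does even without reading autotest_common.sh: validate its arguments (the '[' 3 -le 1 ']' checks), print a START banner, time the sub-script, and print a matching END banner when it exits cleanly. A toy equivalent, for orientation only (the real helper also manages the xtrace log levels seen throughout this log):

  run_test() {                         # usage: run_test <name> <script> [args...]
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                        # emits the real/user/sys lines seen after each test
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  # e.g. (script path shortened here):
  run_test nvmf_example ./nvmf_example.sh --transport=tcp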
00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:16.902 ************************************ 00:09:16.902 START TEST nvmf_example 00:09:16.902 ************************************ 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:16.902 * Looking for test storage... 00:09:16.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.902 10:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:16.902 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:16.903 10:16:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:18.809 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:18.809 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:18.809 Found net devices under 0000:08:00.0: cvl_0_0 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.809 10:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:18.809 Found net devices under 0000:08:00.1: cvl_0_1 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.809 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:18.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:18.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:09:18.810 00:09:18.810 --- 10.0.0.2 ping statistics --- 00:09:18.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.810 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:09:18.810 00:09:18.810 --- 10.0.0.1 ping statistics --- 00:09:18.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.810 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1454174 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1454174 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1454174 ']' 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.810 10:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.810 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.810 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.069 10:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:19.069 10:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:19.069 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.266 Initializing NVMe Controllers 00:09:31.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:31.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:31.266 Initialization complete. Launching workers. 00:09:31.266 ======================================================== 00:09:31.266 Latency(us) 00:09:31.266 Device Information : IOPS MiB/s Average min max 00:09:31.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13911.60 54.34 4601.20 1039.06 21755.66 00:09:31.266 ======================================================== 00:09:31.266 Total : 13911.60 54.34 4601.20 1039.06 21755.66 00:09:31.266 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.266 rmmod nvme_tcp 00:09:31.266 rmmod nvme_fabrics 00:09:31.266 rmmod nvme_keyring 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:31.266 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1454174 ']' 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1454174 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1454174 ']' 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1454174 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.267 10:16:19 
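The perf result is internally consistent: spdk_nvme_perf drove 4 KiB random I/O (-M 30, i.e. 30% reads) at queue depth 64 for 10 seconds, and by Little's law the queue depth, mean latency, and IOPS have to agree:

    IOPS ≈ QD / avg latency = 64 / 4601.20 us ≈ 13,910     (reported: 13911.60)
    Throughput = 13911.60 IOPS × 4096 B ≈ 54.34 MiB/s      (reported: 54.34)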
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1454174 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1454174' 00:09:31.267 killing process with pid 1454174 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1454174 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1454174 00:09:31.267 nvmf threads initialize successfully 00:09:31.267 bdev subsystem init successfully 00:09:31.267 created a nvmf target service 00:09:31.267 create targets's poll groups done 00:09:31.267 all subsystems of target started 00:09:31.267 nvmf target is running 00:09:31.267 all subsystems of target stopped 00:09:31.267 destroy targets's poll groups done 00:09:31.267 destroyed the nvmf target service 00:09:31.267 bdev subsystem finish successfully 00:09:31.267 nvmf threads destroy successfully 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.267 10:16:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.840 00:09:31.840 real 0m14.781s 00:09:31.840 user 0m41.947s 00:09:31.840 sys 0m2.953s 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.840 ************************************ 00:09:31.840 END TEST nvmf_example 00:09:31.840 ************************************ 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.840 10:16:21 
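Teardown mirrors setup in reverse: the EXIT trap is cleared, nvmftestfini unloads the initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), killprocess double-checks that the PID still names the nvmf binary before killing it, and finally the namespace and the host-side cvl_0_1 address are removed. Condensed into a sketch, with the PID and names taken from the log and `ip netns delete` assumed as the effect of _remove_spdk_ns:

    pid=1454174                                       # recorded when the target started
    [[ $(ps --no-headers -o comm= "$pid") == nvmf ]] \
        && kill "$pid"                                # only kill if it is still our app
    modprobe -r nvme-tcp nvme-fabrics                 # initiator modules loaded for the test
    ip netns delete cvl_0_0_ns_spdk                   # drop the target namespace
    ip -4 addr flush cvl_0_1                          # clear the host-side veth address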
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:31.840 ************************************ 00:09:31.840 START TEST nvmf_filesystem 00:09:31.840 ************************************ 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:31.840 * Looking for test storage... 00:09:31.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:31.840 10:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:31.840 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:31.841 10:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:31.841 10:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:31.841 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:31.841 10:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:31.841 #define SPDK_CONFIG_H 00:09:31.841 #define SPDK_CONFIG_APPS 1 00:09:31.841 #define SPDK_CONFIG_ARCH native 00:09:31.841 #undef SPDK_CONFIG_ASAN 00:09:31.841 #undef SPDK_CONFIG_AVAHI 00:09:31.841 #undef SPDK_CONFIG_CET 00:09:31.841 #define SPDK_CONFIG_COVERAGE 1 00:09:31.841 #define SPDK_CONFIG_CROSS_PREFIX 00:09:31.841 #undef SPDK_CONFIG_CRYPTO 00:09:31.841 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:31.841 #undef SPDK_CONFIG_CUSTOMOCF 00:09:31.841 #undef SPDK_CONFIG_DAOS 00:09:31.841 #define SPDK_CONFIG_DAOS_DIR 00:09:31.841 #define SPDK_CONFIG_DEBUG 1 00:09:31.841 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:31.841 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:31.841 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:31.841 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:31.841 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:31.841 #undef SPDK_CONFIG_DPDK_UADK 00:09:31.841 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:31.841 #define SPDK_CONFIG_EXAMPLES 1 00:09:31.841 #undef SPDK_CONFIG_FC 00:09:31.841 #define SPDK_CONFIG_FC_PATH 00:09:31.841 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:31.841 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:31.841 #undef SPDK_CONFIG_FUSE 00:09:31.841 #undef SPDK_CONFIG_FUZZER 00:09:31.841 #define SPDK_CONFIG_FUZZER_LIB 00:09:31.841 #undef SPDK_CONFIG_GOLANG 00:09:31.841 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:31.841 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:31.841 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:31.841 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:31.841 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:31.841 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:31.841 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:31.841 #define SPDK_CONFIG_IDXD 1 00:09:31.841 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:31.841 #undef SPDK_CONFIG_IPSEC_MB 00:09:31.841 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:31.841 #define SPDK_CONFIG_ISAL 1 00:09:31.841 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:31.842 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:31.842 #define SPDK_CONFIG_LIBDIR 00:09:31.842 #undef SPDK_CONFIG_LTO 00:09:31.842 #define SPDK_CONFIG_MAX_LCORES 128 00:09:31.842 #define SPDK_CONFIG_NVME_CUSE 1 00:09:31.842 #undef SPDK_CONFIG_OCF 00:09:31.842 #define SPDK_CONFIG_OCF_PATH 00:09:31.842 #define SPDK_CONFIG_OPENSSL_PATH 00:09:31.842 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:31.842 #define SPDK_CONFIG_PGO_DIR 00:09:31.842 #undef SPDK_CONFIG_PGO_USE 00:09:31.842 #define SPDK_CONFIG_PREFIX /usr/local 00:09:31.842 #undef SPDK_CONFIG_RAID5F 00:09:31.842 #undef SPDK_CONFIG_RBD 00:09:31.842 #define SPDK_CONFIG_RDMA 1 00:09:31.842 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:31.842 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:31.842 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:31.842 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:31.842 #define SPDK_CONFIG_SHARED 1 00:09:31.842 #undef SPDK_CONFIG_SMA 00:09:31.842 #define SPDK_CONFIG_TESTS 1 00:09:31.842 #undef SPDK_CONFIG_TSAN 00:09:31.842 #define SPDK_CONFIG_UBLK 1 00:09:31.842 #define SPDK_CONFIG_UBSAN 1 00:09:31.842 #undef SPDK_CONFIG_UNIT_TESTS 00:09:31.842 #undef SPDK_CONFIG_URING 00:09:31.842 #define SPDK_CONFIG_URING_PATH 00:09:31.842 #undef SPDK_CONFIG_URING_ZNS 00:09:31.842 #undef SPDK_CONFIG_USDT 00:09:31.842 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:31.842 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:31.842 #define SPDK_CONFIG_VFIO_USER 1 00:09:31.842 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:09:31.842 #define SPDK_CONFIG_VHOST 1 00:09:31.842 #define SPDK_CONFIG_VIRTIO 1 00:09:31.842 #undef SPDK_CONFIG_VTUNE 00:09:31.842 #define SPDK_CONFIG_VTUNE_DIR 00:09:31.842 #define SPDK_CONFIG_WERROR 1 00:09:31.842 #define SPDK_CONFIG_WPDK_DIR 00:09:31.842 #undef SPDK_CONFIG_XNVME 00:09:31.842 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
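The long backslash run in applications.sh@23 is just xtrace escaping every character of an unquoted glob pattern: the check reads include/spdk/config.h (dumped above) and tests whether the build defines SPDK_CONFIG_DEBUG. Written out plainly, with the path relative to the SPDK root:

    # xtrace renders the unquoted right-hand side as *\#\d\e\f\i\n\e\ ...* in the log
    if [[ $(< include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build; SPDK_AUTOTEST_DEBUG_APPS can take effect"
    fi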
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:31.842 10:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:31.842 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:31.843 10:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:31.843 10:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
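Every ': 0' (or ': 1') immediately followed by an 'export SPDK_TEST_*' in this stretch is xtrace output of the default-assignment idiom: the colon built-in evaluates its arguments, so a ${VAR:=default} expansion inside it sets a default only when the job did not already provide a value. For one flag from the log (the job set it, so xtrace printed ': 1' rather than the default):

    : "${SPDK_TEST_NVMF:=0}"    # assign 0 only if unset; xtrace shows the expanded value
    export SPDK_TEST_NVMF       # then export it for child scripts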
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:31.843 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
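The LSAN lines above assemble a leak-suppression file on the fly so that known leaks in libfuse3 do not fail sanitized runs. Condensed, with the path from the log (the harness may append rather than overwrite):

    echo "leak:libfuse3.so" > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file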
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j32 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1455402 ]] 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1455402 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.4PszrP 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4PszrP/tests/target /tmp/spdk.4PszrP 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:09:31.844 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1957711872 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=3326717952 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=42782015488 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=53546168320 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10764152832 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=26761826304 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=26773082112 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=11255808 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=10687102976 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=10709233664 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22130688 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=26772238336 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=26773086208 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=847872 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=5354610688 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5354614784 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:31.845 * Looking for test storage... 
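The read loop traced above has just cached every mount from `df -T` into the mounts/fss/sizes/avails/uses associative arrays; the entries that follow walk storage_candidates and accept the first directory whose backing mount can hold the requested 2,214,592,512 bytes (2 GiB plus slack). A self-contained sketch of that selection logic, assuming GNU coreutils df; the function name pick_test_storage is illustrative, not the harness's:

    #!/usr/bin/env bash
    # Sketch: cache `df -T` into associative arrays keyed by mount point,
    # then return the first candidate directory with enough free space.
    pick_test_storage() {
        local requested_size=$1; shift              # bytes, e.g. 2214592512
        local -A fss avails
        local source fs size use avail mount
        # df -T columns: Filesystem Type 1K-blocks Used Available Use% Mounted-on
        while read -r source fs size use avail _ mount; do
            fss["$mount"]=$fs
            avails["$mount"]=$((avail * 1024))      # df counts 1K blocks
        done < <(df -T | grep -v Filesystem)
        local dir mnt
        for dir in "$@"; do                         # candidate directories
            mnt=$(df "$dir" 2>/dev/null | awk '$1 !~ /Filesystem/{print $6}')
            [[ -n $mnt ]] || continue
            if (( avails[$mnt] >= requested_size )); then
                printf '* Found test storage at %s\n' "$dir"
                return 0
            fi
        done
        return 1
    }

In this run the overlay root `/` wins with 42,782,015,488 bytes available, so the test storage stays under the spdk checkout rather than falling back to /tmp/spdk.4PszrP.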
00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=42782015488 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=12978745344 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:31.845 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:31.846 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:33.748 
10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:33.748 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:33.748 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:33.748 Found net devices under 0000:08:00.0: cvl_0_0 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:33.748 Found net devices under 0000:08:00.1: cvl_0_1 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.748 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:33.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:09:33.749 00:09:33.749 --- 10.0.0.2 ping statistics --- 00:09:33.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.749 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:09:33.749 00:09:33.749 --- 10.0.0.1 ping statistics --- 00:09:33.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.749 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.749 ************************************ 00:09:33.749 START TEST nvmf_filesystem_no_in_capsule 00:09:33.749 ************************************ 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1456648 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1456648 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1456648 ']' 00:09:33.749 
10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.749 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:33.749 [2024-07-25 10:16:23.516050] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:09:33.749 [2024-07-25 10:16:23.516147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.005 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.006 [2024-07-25 10:16:23.585640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.006 [2024-07-25 10:16:23.706991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.006 [2024-07-25 10:16:23.707064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.006 [2024-07-25 10:16:23.707080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.006 [2024-07-25 10:16:23.707093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.006 [2024-07-25 10:16:23.707104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
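nvmfappstart has just launched the target inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF`, pid 1456648) and waitforlisten now polls /var/tmp/spdk.sock with max_retries=100 until the reactor start-up notices below appear and the RPC server answers. A condensed sketch of that start-and-wait pattern; probing with `rpc_get_methods` is one cheap way to do it, not necessarily the harness's exact probe:

    # Sketch: start nvmf_tgt in the target netns, then block until its
    # RPC socket accepts requests (or the process dies).
    rpc_sock=/var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do
        # kill -0 only probes the pid; a real RPC proves the socket is live
        kill -0 "$nvmfpid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done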
00:09:34.006 [2024-07-25 10:16:23.707202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.006 [2024-07-25 10:16:23.707255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.006 [2024-07-25 10:16:23.707303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.006 [2024-07-25 10:16:23.707306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.263 [2024-07-25 10:16:23.857822] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.263 Malloc1 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.263 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.263 10:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.263 [2024-07-25 10:16:24.018091] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.263 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:34.263 { 00:09:34.263 "name": "Malloc1", 00:09:34.263 "aliases": [ 00:09:34.263 "ed03c561-7d97-4a80-a0c1-29557b63c837" 00:09:34.263 ], 00:09:34.263 "product_name": "Malloc disk", 00:09:34.263 "block_size": 512, 00:09:34.263 "num_blocks": 1048576, 00:09:34.263 "uuid": "ed03c561-7d97-4a80-a0c1-29557b63c837", 00:09:34.263 "assigned_rate_limits": { 00:09:34.263 "rw_ios_per_sec": 0, 00:09:34.263 "rw_mbytes_per_sec": 0, 00:09:34.263 "r_mbytes_per_sec": 0, 00:09:34.263 "w_mbytes_per_sec": 0 00:09:34.263 }, 00:09:34.263 "claimed": true, 00:09:34.263 "claim_type": "exclusive_write", 00:09:34.263 "zoned": false, 00:09:34.263 "supported_io_types": { 00:09:34.263 "read": 
true, 00:09:34.263 "write": true, 00:09:34.263 "unmap": true, 00:09:34.263 "flush": true, 00:09:34.263 "reset": true, 00:09:34.263 "nvme_admin": false, 00:09:34.263 "nvme_io": false, 00:09:34.263 "nvme_io_md": false, 00:09:34.263 "write_zeroes": true, 00:09:34.263 "zcopy": true, 00:09:34.263 "get_zone_info": false, 00:09:34.263 "zone_management": false, 00:09:34.263 "zone_append": false, 00:09:34.263 "compare": false, 00:09:34.263 "compare_and_write": false, 00:09:34.263 "abort": true, 00:09:34.263 "seek_hole": false, 00:09:34.263 "seek_data": false, 00:09:34.263 "copy": true, 00:09:34.263 "nvme_iov_md": false 00:09:34.263 }, 00:09:34.263 "memory_domains": [ 00:09:34.263 { 00:09:34.263 "dma_device_id": "system", 00:09:34.263 "dma_device_type": 1 00:09:34.263 }, 00:09:34.263 { 00:09:34.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.263 "dma_device_type": 2 00:09:34.263 } 00:09:34.263 ], 00:09:34.263 "driver_specific": {} 00:09:34.263 } 00:09:34.263 ]' 00:09:34.521 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:34.521 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:34.521 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:34.521 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:34.521 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:34.521 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:34.521 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:34.521 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.103 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:35.103 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:35.103 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:35.103 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:35.103 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:37.054 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:37.312 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:37.628 10:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.009 ************************************ 00:09:39.009 START TEST filesystem_ext4 00:09:39.009 ************************************ 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
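Before the START TEST banner above, the trace configured the full export path through the rpc_cmd wrapper: a TCP transport with an 8192-byte I/O unit and zero in-capsule data (this is the no_in_capsule variant), a 512 MiB malloc bdev (1,048,576 blocks of 512 bytes, as the bdev_get_bdevs JSON confirms), subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420, followed by `nvme connect` from the initiator with the generated hostnqn. Gathered into one runnable block, with rpc.py standing in for rpc_cmd:

    # The RPC sequence from the trace, collected in one place.
    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"

    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 512 512 -b Malloc1         # 512 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                     # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # initiator side, then wait for the namespace to appear by serial
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
        --hostid=a27f578f-8275-e111-bd1d-001e673e77fc
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1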
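The trace that follows then runs the actual check for ext4: pick the mkfs force flag for the fstype, make the filesystem on the GPT partition carved out a few entries earlier, and exercise it with a touch/sync/rm/sync/umount round-trip before confirming that the target (kill -0 1456648) and the partition both survived. Condensed into plain shell, with make_filesystem's retry counter omitted and $nvmfpid assumed to hold the target pid:

    # Condensed version of the filesystem exercise traced below.
    fstype=ext4 nvme_name=nvme0n1

    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1

    force=-f                                   # btrfs/xfs spelling
    [[ $fstype == ext4 ]] && force=-F          # ext4 spells it -F
    "mkfs.$fstype" "$force" "/dev/${nvme_name}p1"

    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa; sync
    rm /mnt/device/aaa;    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                              # target survived the I/O
    lsblk -l -o NAME | grep -q -w "${nvme_name}p1"  # partition still visible

The same round-trip repeats for btrfs (and later xfs) under the filesystem_btrfs test, which is why the mkfs.btrfs banner appears further down with the `-f` form of the force flag.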
00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:39.009 mke2fs 1.46.5 (30-Dec-2021) 00:09:39.009 Discarding device blocks: 0/522240 done 00:09:39.009 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:39.009 Filesystem UUID: c5d56b16-2064-4fd0-8930-0251f1f03d14 00:09:39.009 Superblock backups stored on blocks: 00:09:39.009 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:39.009 00:09:39.009 Allocating group tables: 0/64 done 00:09:39.009 Writing inode tables: 0/64 done 00:09:39.009 Creating journal (8192 blocks): done 00:09:39.009 Writing superblocks and filesystem accounting information: 0/64 done 00:09:39.009 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:39.009 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:39.268 
10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1456648 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:39.268 00:09:39.268 real 0m0.468s 00:09:39.268 user 0m0.025s 00:09:39.268 sys 0m0.053s 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:39.268 ************************************ 00:09:39.268 END TEST filesystem_ext4 00:09:39.268 ************************************ 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.268 ************************************ 00:09:39.268 START TEST filesystem_btrfs 00:09:39.268 ************************************ 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:39.268 10:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:39.268 10:16:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:39.526 btrfs-progs v6.6.2 00:09:39.526 See https://btrfs.readthedocs.io for more information. 00:09:39.526 00:09:39.526 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:39.526 NOTE: several default settings have changed in version 5.15, please make sure 00:09:39.526 this does not affect your deployments: 00:09:39.526 - DUP for metadata (-m dup) 00:09:39.526 - enabled no-holes (-O no-holes) 00:09:39.526 - enabled free-space-tree (-R free-space-tree) 00:09:39.526 00:09:39.526 Label: (null) 00:09:39.526 UUID: 25d41f85-9a55-43ce-91a3-6c5533cb3a2a 00:09:39.526 Node size: 16384 00:09:39.526 Sector size: 4096 00:09:39.526 Filesystem size: 510.00MiB 00:09:39.526 Block group profiles: 00:09:39.526 Data: single 8.00MiB 00:09:39.526 Metadata: DUP 32.00MiB 00:09:39.526 System: DUP 8.00MiB 00:09:39.526 SSD detected: yes 00:09:39.526 Zoned device: no 00:09:39.526 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:39.526 Runtime features: free-space-tree 00:09:39.526 Checksum: crc32c 00:09:39.526 Number of devices: 1 00:09:39.526 Devices: 00:09:39.526 ID SIZE PATH 00:09:39.526 1 510.00MiB /dev/nvme0n1p1 00:09:39.526 00:09:39.526 10:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:39.526 10:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:40.464 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:40.464 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:40.464 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:40.464 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:40.464 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1456648 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:40.465 00:09:40.465 real 0m1.241s 00:09:40.465 user 0m0.020s 00:09:40.465 sys 0m0.116s 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:40.465 ************************************ 00:09:40.465 END TEST filesystem_btrfs 00:09:40.465 ************************************ 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.465 ************************************ 00:09:40.465 START TEST filesystem_xfs 00:09:40.465 ************************************ 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:40.465 10:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:40.722 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:40.722 = sectsz=512 attr=2, projid32bit=1 00:09:40.722 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:40.722 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:09:40.722 data = bsize=4096 blocks=130560, imaxpct=25 00:09:40.722 = sunit=0 swidth=0 blks 00:09:40.722 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:40.722 log =internal log bsize=4096 blocks=16384, version=2 00:09:40.722 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:40.722 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:41.665 Discarding blocks...Done. 00:09:41.665 10:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:41.665 10:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1456648 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:44.210 00:09:44.210 real 0m3.274s 00:09:44.210 user 0m0.010s 00:09:44.210 sys 0m0.068s 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:44.210 ************************************ 00:09:44.210 END TEST filesystem_xfs 00:09:44.210 ************************************ 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1456648 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1456648 ']' 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1456648 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1456648 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1456648' 00:09:44.210 killing process with pid 1456648 00:09:44.210 10:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1456648 00:09:44.210 10:16:33 
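waitforserial_disconnect, traced above, simply polls lsblk until no block device advertises the subsystem serial any more. A sketch of that loop (the 15-iteration bound and 1 s sleep are my choices; the serial comes from the trace):

```bash
# Poll until no block device reports the given NVMe serial any more.
waitforserial_disconnect() {
    local serial=$1 i=0

    while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1   # give up instead of hanging forever
        sleep 1
    done
}

waitforserial_disconnect SPDKISFASTANDAWESOME
```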
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1456648 00:09:44.470 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:44.470 00:09:44.470 real 0m10.780s 00:09:44.470 user 0m41.001s 00:09:44.470 sys 0m1.762s 00:09:44.470 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.470 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.470 ************************************ 00:09:44.470 END TEST nvmf_filesystem_no_in_capsule 00:09:44.470 ************************************ 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:44.731 ************************************ 00:09:44.731 START TEST nvmf_filesystem_in_capsule 00:09:44.731 ************************************ 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1457873 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1457873 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1457873 ']' 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
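nvmfappstart boils down to launching nvmf_tgt inside the test network namespace, recording its pid, and blocking until the RPC socket answers. A simplified sketch (the rpc_get_methods polling loop stands in for the real waitforlisten helper):

```bash
# Launch the target on 4 cores (-m 0xF) with all tracepoints (-e 0xFFFF).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait until /var/tmp/spdk.sock accepts RPCs, bailing if the target dies.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
done
```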
00:09:44.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.731 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.731 [2024-07-25 10:16:34.348788] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:09:44.731 [2024-07-25 10:16:34.348886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.731 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.731 [2024-07-25 10:16:34.417297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.992 [2024-07-25 10:16:34.537349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.992 [2024-07-25 10:16:34.537417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.992 [2024-07-25 10:16:34.537433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.992 [2024-07-25 10:16:34.537447] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.992 [2024-07-25 10:16:34.537458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.992 [2024-07-25 10:16:34.537554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.992 [2024-07-25 10:16:34.537636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.992 [2024-07-25 10:16:34.537697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.992 [2024-07-25 10:16:34.537690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.992 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.992 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:44.992 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:44.992 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.992 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.992 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.993 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:44.993 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:44.993 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.993 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:09:44.993 [2024-07-25 10:16:34.684801] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.993 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.993 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:44.993 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.993 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.254 Malloc1 00:09:45.254 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.254 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.255 [2024-07-25 10:16:34.845843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:45.255 10:16:34 
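The provisioning RPCs traced above are what separates this pass from the earlier one: nvmf_create_transport is given -c 4096, so writes of up to 4 KiB travel inside the command capsule instead of being fetched with a ready-to-transfer round trip. The same sequence as plain rpc.py calls:

```bash
rpc() { ./scripts/rpc.py "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4 KiB in-capsule data
rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                        # -a: allow any host
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```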
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:45.255 { 00:09:45.255 "name": "Malloc1", 00:09:45.255 "aliases": [ 00:09:45.255 "f0ecd817-e9e7-499f-93a6-4eddbb842ab5" 00:09:45.255 ], 00:09:45.255 "product_name": "Malloc disk", 00:09:45.255 "block_size": 512, 00:09:45.255 "num_blocks": 1048576, 00:09:45.255 "uuid": "f0ecd817-e9e7-499f-93a6-4eddbb842ab5", 00:09:45.255 "assigned_rate_limits": { 00:09:45.255 "rw_ios_per_sec": 0, 00:09:45.255 "rw_mbytes_per_sec": 0, 00:09:45.255 "r_mbytes_per_sec": 0, 00:09:45.255 "w_mbytes_per_sec": 0 00:09:45.255 }, 00:09:45.255 "claimed": true, 00:09:45.255 "claim_type": "exclusive_write", 00:09:45.255 "zoned": false, 00:09:45.255 "supported_io_types": { 00:09:45.255 "read": true, 00:09:45.255 "write": true, 00:09:45.255 "unmap": true, 00:09:45.255 "flush": true, 00:09:45.255 "reset": true, 00:09:45.255 "nvme_admin": false, 00:09:45.255 "nvme_io": false, 00:09:45.255 "nvme_io_md": false, 00:09:45.255 "write_zeroes": true, 00:09:45.255 "zcopy": true, 00:09:45.255 "get_zone_info": false, 00:09:45.255 "zone_management": false, 00:09:45.255 "zone_append": false, 00:09:45.255 "compare": false, 00:09:45.255 "compare_and_write": false, 00:09:45.255 "abort": true, 00:09:45.255 "seek_hole": false, 00:09:45.255 "seek_data": false, 00:09:45.255 "copy": true, 00:09:45.255 "nvme_iov_md": false 00:09:45.255 }, 00:09:45.255 "memory_domains": [ 00:09:45.255 { 00:09:45.255 "dma_device_id": "system", 00:09:45.255 "dma_device_type": 1 00:09:45.255 }, 00:09:45.255 { 00:09:45.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.255 "dma_device_type": 2 00:09:45.255 } 00:09:45.255 ], 00:09:45.255 "driver_specific": {} 00:09:45.255 } 00:09:45.255 ]' 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:45.255 10:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:45.255 10:16:34 
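get_bdev_size, traced above, pulls block_size and num_blocks out of bdev_get_bdevs with jq and reports the size in MiB, which the test then compares against the size of the nvme block device. A close sketch:

```bash
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb

    bdev_info=$(./scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 512 for Malloc1
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1048576 for Malloc1
    echo $(( bs * nb / 1024 / 1024 ))             # -> 512 (MiB)
}
```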
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:45.825 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:45.825 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:45.825 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.825 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:45.825 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:47.735 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:47.735 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:47.735 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:47.736 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
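After nvme connect, waitforserial polls lsblk until the expected number of namespaces carrying the subsystem serial shows up, and the device name is then recovered with a look-ahead grep. A sketch of both steps (the retry bound mirrors the (( i++ <= 15 )) seen in the trace):

```bash
waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found

    while (( i++ <= 15 )); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
        (( found == expected )) && return 0
        sleep 2
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME
nvme_name=$(lsblk -l -o NAME,SERIAL |
            grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # -> nvme0n1
```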
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:47.994 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:48.562 10:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:49.956 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:49.956 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:49.956 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:49.956 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.956 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.956 ************************************ 00:09:49.956 START TEST filesystem_in_capsule_ext4 00:09:49.956 ************************************ 00:09:49.956 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:49.957 10:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:49.957 mke2fs 1.46.5 (30-Dec-2021) 00:09:49.957 Discarding device blocks: 0/522240 done 00:09:49.957 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:49.957 Filesystem UUID: 6ffdfd97-3529-4186-9da7-1ea1184236c4 00:09:49.957 Superblock backups stored on blocks: 00:09:49.957 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:09:49.957 00:09:49.957 Allocating group tables: 0/64 done 00:09:49.957 Writing inode tables: 0/64 done 00:09:51.339 Creating journal (8192 blocks): done 00:09:51.339 Writing superblocks and filesystem accounting information: 0/64 done 00:09:51.339 00:09:51.339 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:51.339 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1457873 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:51.598 00:09:51.598 real 0m2.022s 00:09:51.598 user 0m0.013s 00:09:51.598 sys 0m0.064s 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:51.598 ************************************ 00:09:51.598 END TEST filesystem_in_capsule_ext4 00:09:51.598 ************************************ 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:51.598 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.598 10:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.858 ************************************ 00:09:51.858 START TEST filesystem_in_capsule_btrfs 00:09:51.858 ************************************ 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:51.858 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:51.858 btrfs-progs v6.6.2 00:09:51.858 See https://btrfs.readthedocs.io for more information. 00:09:51.858 00:09:51.858 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:51.858 NOTE: several default settings have changed in version 5.15, please make sure 00:09:51.859 this does not affect your deployments: 00:09:51.859 - DUP for metadata (-m dup) 00:09:51.859 - enabled no-holes (-O no-holes) 00:09:51.859 - enabled free-space-tree (-R free-space-tree) 00:09:51.859 00:09:51.859 Label: (null) 00:09:51.859 UUID: 35419571-562a-43e8-a5ad-2cde28aab78e 00:09:51.859 Node size: 16384 00:09:51.859 Sector size: 4096 00:09:51.859 Filesystem size: 510.00MiB 00:09:51.859 Block group profiles: 00:09:51.859 Data: single 8.00MiB 00:09:51.859 Metadata: DUP 32.00MiB 00:09:51.859 System: DUP 8.00MiB 00:09:51.859 SSD detected: yes 00:09:51.859 Zoned device: no 00:09:51.859 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:51.859 Runtime features: free-space-tree 00:09:51.859 Checksum: crc32c 00:09:51.859 Number of devices: 1 00:09:51.859 Devices: 00:09:51.859 ID SIZE PATH 00:09:51.859 1 510.00MiB /dev/nvme0n1p1 00:09:51.859 00:09:51.859 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:51.859 10:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1457873 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:52.428 00:09:52.428 real 0m0.659s 00:09:52.428 user 0m0.017s 00:09:52.428 sys 0m0.118s 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.428 10:16:42 
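Each pass ends with the same assertions, visible at filesystem.sh@37-43: the target process must have survived the I/O, and both the namespace and its partition must still be listed. Spelled out (the pid variable is illustrative; this run's pid is 1457873):

```bash
kill -0 "$nvmfpid"                        # @37: target still alive
lsblk -l -o NAME | grep -q -w nvme0n1     # @40: namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # @43: partition still visible
```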
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:52.428 ************************************ 00:09:52.428 END TEST filesystem_in_capsule_btrfs 00:09:52.428 ************************************ 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.428 ************************************ 00:09:52.428 START TEST filesystem_in_capsule_xfs 00:09:52.428 ************************************ 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:52.428 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:52.428 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:52.428 = sectsz=512 attr=2, projid32bit=1 00:09:52.428 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:52.428 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:52.428 data = bsize=4096 blocks=130560, imaxpct=25 00:09:52.428 = sunit=0 swidth=0 blks 00:09:52.428 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:52.428 log =internal log bsize=4096 blocks=16384, version=2 00:09:52.428 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:52.428 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:09:53.367 Discarding blocks...Done. 00:09:53.367 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:53.367 10:16:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:55.904 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:55.904 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:55.904 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1457873 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:55.905 00:09:55.905 real 0m3.343s 00:09:55.905 user 0m0.020s 00:09:55.905 sys 0m0.060s 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 ************************************ 00:09:55.905 END TEST filesystem_in_capsule_xfs 00:09:55.905 ************************************ 00:09:55.905 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:56.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:56.165 10:16:45 
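The partition bracketing each filesystem group is also visible in the trace: created once with parted/partprobe (filesystem.sh@68-69) and torn down under flock before the controller is disconnected (@91-94). As a sketch:

```bash
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # @68
partprobe                                                     # @69

# ... ext4/btrfs/xfs smoke tests run against /dev/nvme0n1p1 ...

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # @91: serialize vs. udev
sync                                             # @93
nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # @94
```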
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.165 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1457873 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1457873 ']' 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1457873 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1457873 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1457873' 00:09:56.166 killing process with pid 1457873 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1457873 00:09:56.166 10:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1457873 00:09:56.426 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:56.426 00:09:56.426 real 0m11.903s 00:09:56.426 user 0m45.423s 
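killprocess, traced above, is careful in two ways: it verifies the pid is still alive with kill -0, and on Linux it refuses to signal a process whose comm is literally sudo, so it never kills the wrapper instead of the target. A condensed sketch:

```bash
killprocess() {
    local pid=$1 process_name

    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                   # @954: still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # never kill sudo itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                  # @974: reap it
}
```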
00:09:56.426 sys 0m1.840s 00:09:56.426 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.426 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.426 ************************************ 00:09:56.426 END TEST nvmf_filesystem_in_capsule 00:09:56.426 ************************************ 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.685 rmmod nvme_tcp 00:09:56.685 rmmod nvme_fabrics 00:09:56.685 rmmod nvme_keyring 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.685 10:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.597 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.597 00:09:58.597 real 0m26.927s 00:09:58.597 user 1m27.218s 00:09:58.597 sys 0m5.044s 00:09:58.597 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.597 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.597 ************************************ 00:09:58.597 END TEST nvmf_filesystem 00:09:58.597 ************************************ 00:09:58.597 10:16:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:58.597 10:16:48 nvmf_tcp.nvmf_target_extra -- 
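nvmftestfini's teardown, traced above, unloads the kernel initiator modules (the rmmod lines show nvme_tcp dragging nvme_fabrics and nvme_keyring out with it), then removes the SPDK network namespace and flushes the leftover address. A sketch, assuming the namespace name from this run; the real _remove_spdk_ns helper is more general:

```bash
set +e                                  # unloads can fail while refs drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e

ip netns delete cvl_0_0_ns_spdk         # assumed _remove_spdk_ns equivalent
ip -4 addr flush cvl_0_1
```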
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:58.597 10:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.597 10:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:58.857 ************************************ 00:09:58.857 START TEST nvmf_target_discovery 00:09:58.857 ************************************ 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:58.857 * Looking for test storage... 00:09:58.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.857 10:16:48 
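One detail worth pulling out of the setup block above: every test in this group re-sources nvmf/common.sh, which fixes the listener ports (4420, with 4421/4422 as spares) and mints a host identity from nvme-cli. The NQN comes verbatim from nvme gen-hostnqn, and the host ID is its UUID suffix; the extraction below is a plausible reading of common.sh, not shown verbatim in this excerpt:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:a27f578f-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: strip everything up to the UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")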
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:58.857 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.769 10:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:00.769 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.769 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:00.769 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:00.770 Found net devices under 0000:08:00.0: cvl_0_0 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.770 10:16:50 
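Both functions of the Intel E810 NIC (vendor 0x8086, device 0x159b, bound to the ice driver) at 0000:08:00.0 and .1 survive the filtering above, and the helper then resolves each PCI function to its kernel interface through sysfs, yielding cvl_0_0 here and cvl_0_1 just below. The resolution, lifted from the traced lines at nvmf/common.sh@383-401:

  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:08:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    net_devs+=("${pci_net_devs[@]}")
  done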
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:00.770 Found net devices under 0000:08:00.1: cvl_0_1 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.770 10:16:50 
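With cvl_0_0 and cvl_0_1 in hand, nvmf_tcp_init builds the topology this phy job always uses: the target-side port is moved into a private network namespace, so traffic between initiator (10.0.0.1, default namespace) and target (10.0.0.2, inside cvl_0_0_ns_spdk) actually crosses the link between the two E810 ports, which are presumably cabled back-to-back on this rig. Collected from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the default ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up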
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:10:00.770 00:10:00.770 --- 10.0.0.2 ping statistics --- 00:10:00.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.770 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:10:00.770 00:10:00.770 --- 10.0.0.1 ping statistics --- 00:10:00.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.770 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1460582 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1460582 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1460582 ']' 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.770 10:16:50 
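A single ping in each direction (0.235 ms out, 0.114 ms back above) proves the namespaced link before anything NVMe-related starts; an iptables rule opens port 4420 on the initiator-facing interface, nvme-tcp is loaded, and nvmfappstart brings the target up inside the namespace. The launch traced at nvmf/common.sh@480 amounts to:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  # -m 0xF: four reactor cores; -e 0xFFFF: full tracepoint group mask; pid recorded as 1460582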
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.770 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:00.770 [2024-07-25 10:16:50.327787] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:10:00.770 [2024-07-25 10:16:50.327879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.770 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.770 [2024-07-25 10:16:50.394291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.770 [2024-07-25 10:16:50.514197] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.770 [2024-07-25 10:16:50.514259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.770 [2024-07-25 10:16:50.514275] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.770 [2024-07-25 10:16:50.514288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.770 [2024-07-25 10:16:50.514299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.770 [2024-07-25 10:16:50.516506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.770 [2024-07-25 10:16:50.516628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.770 [2024-07-25 10:16:50.520547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.770 [2024-07-25 10:16:50.520552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 [2024-07-25 10:16:50.667775] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 Null1 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 [2024-07-25 10:16:50.708063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 Null2 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 
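With the TCP transport created (the trace shows -t tcp -o -u 8192, i.e. an 8192-byte IO unit size; -o is an additional transport flag taken as-is from the trace), discovery.sh provisions four identical targets in a loop. One iteration, reconstructed from the RPCs above with rpc.py standing in for rpc_cmd:

  rpc.py bdev_null_create Null1 102400 512                 # null bdev: discards writes, stores nothing
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Here -a allows any host and -s sets the serial number that later shows up in the subsystem dump.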
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 Null3 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.032 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 Null4 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:10:01.292 00:10:01.292 Discovery Log Number of Records 6, Generation counter 6 00:10:01.292 =====Discovery Log Entry 0====== 00:10:01.292 trtype: tcp 00:10:01.292 adrfam: ipv4 00:10:01.292 subtype: current discovery subsystem 00:10:01.292 treq: not required 00:10:01.292 portid: 0 00:10:01.292 trsvcid: 4420 00:10:01.292 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:01.292 traddr: 10.0.0.2 00:10:01.292 eflags: explicit discovery connections, duplicate discovery information 00:10:01.292 sectype: none 00:10:01.292 =====Discovery Log Entry 1====== 00:10:01.292 trtype: tcp 00:10:01.292 adrfam: ipv4 00:10:01.292 subtype: nvme subsystem 00:10:01.292 treq: not required 00:10:01.292 portid: 0 00:10:01.292 trsvcid: 4420 00:10:01.292 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:01.292 traddr: 10.0.0.2 00:10:01.292 eflags: none 00:10:01.292 sectype: none 00:10:01.292 =====Discovery Log Entry 2====== 00:10:01.292 trtype: tcp 00:10:01.292 adrfam: ipv4 00:10:01.292 subtype: nvme subsystem 00:10:01.292 treq: not required 00:10:01.292 portid: 0 00:10:01.292 trsvcid: 4420 00:10:01.292 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:01.292 traddr: 10.0.0.2 00:10:01.292 eflags: none 00:10:01.292 sectype: none 00:10:01.292 =====Discovery Log Entry 3====== 00:10:01.292 trtype: tcp 00:10:01.292 adrfam: ipv4 00:10:01.292 subtype: nvme subsystem 00:10:01.292 treq: not required 00:10:01.292 portid: 0 00:10:01.292 trsvcid: 4420 00:10:01.292 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:01.292 traddr: 10.0.0.2 00:10:01.292 eflags: none 00:10:01.292 sectype: none 00:10:01.292 =====Discovery Log Entry 4====== 00:10:01.292 trtype: tcp 00:10:01.292 adrfam: ipv4 00:10:01.292 subtype: nvme subsystem 00:10:01.292 treq: not required 00:10:01.292 portid: 0 00:10:01.292 trsvcid: 4420 00:10:01.292 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:01.292 traddr: 10.0.0.2 00:10:01.292 eflags: none 00:10:01.292 sectype: none 00:10:01.292 =====Discovery Log Entry 5====== 00:10:01.292 trtype: tcp 00:10:01.292 adrfam: ipv4 00:10:01.292 subtype: discovery subsystem referral 00:10:01.292 treq: not required 00:10:01.292 portid: 0 00:10:01.292 trsvcid: 4430 00:10:01.292 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:01.292 traddr: 10.0.0.2 00:10:01.292 eflags: none 00:10:01.292 sectype: none 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:01.292 Perform nvmf subsystem discovery via RPC 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.292 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 [ 00:10:01.292 { 00:10:01.292 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:01.292 "subtype": "Discovery", 00:10:01.292 "listen_addresses": [ 00:10:01.292 { 00:10:01.292 "trtype": "TCP", 00:10:01.292 "adrfam": "IPv4", 00:10:01.292 "traddr": "10.0.0.2", 00:10:01.292 "trsvcid": "4420" 00:10:01.292 } 00:10:01.292 ], 00:10:01.292 "allow_any_host": true, 00:10:01.292 "hosts": [] 00:10:01.292 }, 00:10:01.292 { 00:10:01.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.292 "subtype": "NVMe", 00:10:01.292 "listen_addresses": [ 00:10:01.292 { 00:10:01.292 "trtype": "TCP", 00:10:01.292 "adrfam": "IPv4", 00:10:01.292 
"traddr": "10.0.0.2", 00:10:01.292 "trsvcid": "4420" 00:10:01.292 } 00:10:01.293 ], 00:10:01.293 "allow_any_host": true, 00:10:01.293 "hosts": [], 00:10:01.293 "serial_number": "SPDK00000000000001", 00:10:01.293 "model_number": "SPDK bdev Controller", 00:10:01.293 "max_namespaces": 32, 00:10:01.293 "min_cntlid": 1, 00:10:01.293 "max_cntlid": 65519, 00:10:01.293 "namespaces": [ 00:10:01.293 { 00:10:01.293 "nsid": 1, 00:10:01.293 "bdev_name": "Null1", 00:10:01.293 "name": "Null1", 00:10:01.293 "nguid": "66E5F6C42B6243B1AA5B5C041AD3EF87", 00:10:01.293 "uuid": "66e5f6c4-2b62-43b1-aa5b-5c041ad3ef87" 00:10:01.293 } 00:10:01.293 ] 00:10:01.293 }, 00:10:01.293 { 00:10:01.293 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:01.293 "subtype": "NVMe", 00:10:01.293 "listen_addresses": [ 00:10:01.293 { 00:10:01.293 "trtype": "TCP", 00:10:01.293 "adrfam": "IPv4", 00:10:01.293 "traddr": "10.0.0.2", 00:10:01.293 "trsvcid": "4420" 00:10:01.293 } 00:10:01.293 ], 00:10:01.293 "allow_any_host": true, 00:10:01.293 "hosts": [], 00:10:01.293 "serial_number": "SPDK00000000000002", 00:10:01.293 "model_number": "SPDK bdev Controller", 00:10:01.293 "max_namespaces": 32, 00:10:01.293 "min_cntlid": 1, 00:10:01.293 "max_cntlid": 65519, 00:10:01.293 "namespaces": [ 00:10:01.293 { 00:10:01.293 "nsid": 1, 00:10:01.293 "bdev_name": "Null2", 00:10:01.293 "name": "Null2", 00:10:01.293 "nguid": "ECE0787FEC044EE089DDAE4319E92B0F", 00:10:01.293 "uuid": "ece0787f-ec04-4ee0-89dd-ae4319e92b0f" 00:10:01.293 } 00:10:01.293 ] 00:10:01.293 }, 00:10:01.293 { 00:10:01.293 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:01.293 "subtype": "NVMe", 00:10:01.293 "listen_addresses": [ 00:10:01.293 { 00:10:01.293 "trtype": "TCP", 00:10:01.293 "adrfam": "IPv4", 00:10:01.293 "traddr": "10.0.0.2", 00:10:01.293 "trsvcid": "4420" 00:10:01.293 } 00:10:01.293 ], 00:10:01.293 "allow_any_host": true, 00:10:01.293 "hosts": [], 00:10:01.293 "serial_number": "SPDK00000000000003", 00:10:01.293 "model_number": "SPDK bdev Controller", 00:10:01.293 "max_namespaces": 32, 00:10:01.293 "min_cntlid": 1, 00:10:01.293 "max_cntlid": 65519, 00:10:01.293 "namespaces": [ 00:10:01.293 { 00:10:01.293 "nsid": 1, 00:10:01.293 "bdev_name": "Null3", 00:10:01.293 "name": "Null3", 00:10:01.293 "nguid": "42D470E6B8524095B08D934600094691", 00:10:01.293 "uuid": "42d470e6-b852-4095-b08d-934600094691" 00:10:01.293 } 00:10:01.293 ] 00:10:01.293 }, 00:10:01.293 { 00:10:01.293 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:01.293 "subtype": "NVMe", 00:10:01.293 "listen_addresses": [ 00:10:01.293 { 00:10:01.293 "trtype": "TCP", 00:10:01.293 "adrfam": "IPv4", 00:10:01.293 "traddr": "10.0.0.2", 00:10:01.293 "trsvcid": "4420" 00:10:01.293 } 00:10:01.293 ], 00:10:01.293 "allow_any_host": true, 00:10:01.293 "hosts": [], 00:10:01.293 "serial_number": "SPDK00000000000004", 00:10:01.293 "model_number": "SPDK bdev Controller", 00:10:01.293 "max_namespaces": 32, 00:10:01.293 "min_cntlid": 1, 00:10:01.293 "max_cntlid": 65519, 00:10:01.293 "namespaces": [ 00:10:01.293 { 00:10:01.293 "nsid": 1, 00:10:01.293 "bdev_name": "Null4", 00:10:01.293 "name": "Null4", 00:10:01.293 "nguid": "7133D289AA0948229A44F652ED90CAF6", 00:10:01.293 "uuid": "7133d289-aa09-4822-9a44-f652ed90caf6" 00:10:01.293 } 00:10:01.293 ] 00:10:01.293 } 00:10:01.293 ] 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:01.293 10:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:01.293 10:16:51 
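While the loop above unwinds the subsystems, note how the two earlier views agreed: the discovery log reported six records (the current discovery subsystem, cnode1 through cnode4, and the referral at port 4430), and nvmf_get_subsystems returned the same four NVMe subsystems, each with one null namespace whose nguid/uuid pairs match. The JSON is easy to post-process with the jq already present on this box; an illustrative filter, not part of the test itself:

  rpc.py nvmf_get_subsystems | jq -r '.[].nqn'   # expected here: the discovery NQN plus cnode1..cnode4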
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.552 rmmod nvme_tcp 00:10:01.552 rmmod nvme_fabrics 00:10:01.552 rmmod nvme_keyring 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.552 10:16:51 
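Teardown is symmetric with setup: delete each subsystem, delete its null bdev, withdraw the 4430 referral, and let bdev_get_bdevs piped through jq confirm nothing is left (check_bdevs comes back empty). nvmftestfini then unloads the kernel modules; the rmmod lines above show nvme_tcp dragging nvme_fabrics and nvme_keyring out with it. The retry loop's exit condition is not visible in this excerpt, so the break below is an assumption:

  set +e                                # tolerate 'module not loaded' on later passes
  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # assumed break; only one pass appears in the trace
  done
  modprobe -v -r nvme-fabrics
  set -e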
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1460582 ']' 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1460582 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1460582 ']' 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1460582 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460582 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460582' 00:10:01.552 killing process with pid 1460582 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1460582 00:10:01.552 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1460582 00:10:01.810 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:01.810 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:01.810 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:01.810 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.810 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:01.810 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.810 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.810 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.722 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:03.722 00:10:03.722 real 0m5.050s 00:10:03.722 user 0m4.092s 00:10:03.722 sys 0m1.605s 00:10:03.722 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.722 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:03.722 ************************************ 00:10:03.722 END TEST nvmf_target_discovery 00:10:03.722 ************************************ 00:10:03.722 10:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:03.722 10:16:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.722 10:16:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.722 10:16:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:03.722 ************************************ 00:10:03.722 START TEST nvmf_referrals 00:10:03.722 ************************************ 00:10:03.722 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:03.982 * Looking for test storage... 00:10:03.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.982 10:16:53 
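nvmf_target_discovery closes clean in about 5 seconds of wall time, and the harness rolls straight into nvmf_referrals, which re-sources the same common.sh (hence the repeated port/hostnqn/PATH dump). What it adds on top, per the target/referrals.sh assignments traced just below, is three referral addresses on the shared referral port:

  NVMF_REFERRAL_IP_1=127.0.0.2
  NVMF_REFERRAL_IP_2=127.0.0.3
  NVMF_REFERRAL_IP_3=127.0.0.4
  NVMF_PORT_REFERRAL=4430
  DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery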
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.982 10:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:03.982 10:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:05.889 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.889 10:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:05.889 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:05.889 Found net devices under 0000:08:00.0: cvl_0_0 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.889 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 
00:10:05.890 Found net devices under 0000:08:00.1: cvl_0_1 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:05.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:05.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms
00:10:05.890
00:10:05.890 --- 10.0.0.2 ping statistics ---
00:10:05.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:05.890 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:05.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:05.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:10:05.890
00:10:05.890 --- 10.0.0.1 ping statistics ---
00:10:05.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:05.890 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1462203
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1462203
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1462203 ']'
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:05.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
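
The two ping passes above close out nvmf_tcp_init: they verify the namespace plumbing that nvmf/common.sh built a few lines earlier, after which nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that plumbing, assuming the same cvl_0_0/cvl_0_1 port names from this rig and a root shell; every command here is lifted from the trace, only the error handling is simplified:

#!/usr/bin/env bash
# Sketch of the netns setup traced in nvmf/common.sh above.
# Assumes ports cvl_0_0/cvl_0_1 already exist; run as root.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator

Keeping the target end in its own namespace is what lets one machine drive a real TCP path between 10.0.0.1 and 10.0.0.2 over the two back-to-back E810 ports.
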
00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.890 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:05.890 [2024-07-25 10:16:55.385208] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:10:05.890 [2024-07-25 10:16:55.385308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.890 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.890 [2024-07-25 10:16:55.450334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.890 [2024-07-25 10:16:55.567657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.890 [2024-07-25 10:16:55.567720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.890 [2024-07-25 10:16:55.567736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.890 [2024-07-25 10:16:55.567749] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.890 [2024-07-25 10:16:55.567760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.890 [2024-07-25 10:16:55.567872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.890 [2024-07-25 10:16:55.567946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.890 [2024-07-25 10:16:55.567995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.890 [2024-07-25 10:16:55.567998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.151 [2024-07-25 10:16:55.712752] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.151 10:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.151 [2024-07-25 10:16:55.724982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:06.151 10:16:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.410 10:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:06.410 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
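
Each get_referral_ips pass above reads the referral list twice, once target-side over the RPC socket and once host-side through the kernel initiator's discovery log, and the test insists the two views agree. A hedged sketch of that round trip; the rpc.py path is assumed from this workspace (rpc_cmd in the harness wraps the same script), and the --hostnqn/--hostid flags shown in the trace are omitted since nvme-cli normally falls back to the host defaults:

#!/usr/bin/env bash
# Sketch of the referral consistency check driven by referrals.sh above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register the three referrals, as referrals.sh@44-46 does.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Target-side view: what the discovery service thinks it refers to.
rpc_view=$("$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

# Host-side view: what a discovery connection to 10.0.0.2:8009 reports.
nvme_view=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)

[[ "$rpc_view" == "$nvme_view" ]] && echo "referral views match"
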
00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:06.669 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.929 10:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.929 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
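
The subnqn checks above are what tell the two referral flavors apart: a referral registered with -n nqn.2016-06.io.spdk:cnode1 surfaces in the discovery log as an "nvme subsystem" record, while one registered against the discovery NQN surfaces as a "discovery subsystem referral" record. A sketch of the subtype filter that get_discovery_entries applies, under the same path and flag assumptions as the previous sketch:

#!/usr/bin/env bash
# Sketch of get_discovery_entries (target/referrals.sh@31-34 above):
# dump the discovery log as JSON and keep only records of one subtype.
get_discovery_entries() {
    local subtype=$1
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq ".records[] | select(.subtype == \"$subtype\")"
}
get_discovery_entries 'nvme subsystem' | jq -r .subnqn
# e.g. nqn.2016-06.io.spdk:cnode1 while the subsystem referral is registered
get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn
# e.g. nqn.2014-08.org.nvmexpress.discovery for a discovery-NQN referral
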
00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.188 10:16:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:07.449 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
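
Everything from the sync onward is nvmftestfini unwinding the rig. A condensed sketch of that path; the retry loop, module names, and interface names come from the trace, while the sleep back-off and the explicit netns delete are illustrative stand-ins for what the harness does inside remove_spdk_ns:

#!/usr/bin/env bash
# Sketch of the nvmftestfini teardown traced above (nvmf/common.sh@117-279).
sync
set +e                           # module unload may fail while handles drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1                      # illustrative back-off between attempts
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid"                  # pid recorded when nvmf_tgt was launched
wait "$nvmfpid"                  # assumes nvmf_tgt is a child of this shell
ip netns delete cvl_0_0_ns_spdk  # stand-in for remove_spdk_ns
ip -4 addr flush cvl_0_1
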
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:07.709 rmmod nvme_tcp
00:10:07.709 rmmod nvme_fabrics
00:10:07.709 rmmod nvme_keyring
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1462203 ']'
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1462203
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1462203 ']'
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1462203
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462203
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462203'
00:10:07.709 killing process with pid 1462203
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1462203
00:10:07.709 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1462203
00:10:07.968 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:07.968 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:07.968 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:07.968 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:07.968 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:07.968 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:07.968 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:07.968 10:16:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:09.881 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:09.881
00:10:09.881 real 0m6.117s
00:10:09.881 user 0m9.198s
00:10:09.881 sys 0m1.825s
00:10:09.881 10:16:59
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.881 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:09.881 ************************************ 00:10:09.881 END TEST nvmf_referrals 00:10:09.881 ************************************ 00:10:09.881 10:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:09.881 10:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:09.881 10:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.881 10:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:09.881 ************************************ 00:10:09.881 START TEST nvmf_connect_disconnect 00:10:09.881 ************************************ 00:10:09.881 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:10.141 * Looking for test storage... 00:10:10.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.141 10:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:10.141 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:10.142 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:10:12.050 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:12.051 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:12.051 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.051 10:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:12.051 Found net devices under 0000:08:00.0: cvl_0_0 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:12.051 Found net devices under 0000:08:00.1: cvl_0_1 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.051 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:12.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:10:12.052 00:10:12.052 --- 10.0.0.2 ping statistics --- 00:10:12.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.052 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:10:12.052 00:10:12.052 --- 10.0.0.1 ping statistics --- 00:10:12.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.052 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1463926 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1463926 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1463926 ']' 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.052 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:12.052 [2024-07-25 10:17:01.575177] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
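Condensed for readability: the namespace plumbing that nvmftestinit just traced (and repeats later for the multitarget test) boils down to the sketch below. Interface names, addresses, and port are exactly the ones echoed above; the real nvmf/common.sh additionally covers RDMA and virtual-NIC setups.

    # Target half of the NIC pair lives in its own network namespace; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns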
00:10:12.052 [2024-07-25 10:17:01.575274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.052 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.052 [2024-07-25 10:17:01.642075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.052 [2024-07-25 10:17:01.759360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.052 [2024-07-25 10:17:01.759416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.052 [2024-07-25 10:17:01.759432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.052 [2024-07-25 10:17:01.759445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.052 [2024-07-25 10:17:01.759458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.052 [2024-07-25 10:17:01.759570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.052 [2024-07-25 10:17:01.759682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.052 [2024-07-25 10:17:01.759765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.052 [2024-07-25 10:17:01.759769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:12.312 [2024-07-25 10:17:01.903791] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.312 10:17:01 
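The target bring-up traced here reduces to roughly the following sketch. The binary path, core mask, and RPC flags are copied from the trace; waitforlisten and rpc_cmd are autotest wrapper functions, so the plain rpc.py invocations below are an approximation of what they do.

    # Start the NVMe-oF target inside the namespace: SHM id 0, all tracepoint groups, 4-core mask.
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten polls until the app accepts RPCs on /var/tmp/spdk.sock, then provisioning starts:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # transport options exactly as traced
    scripts/rpc.py bdev_malloc_create 64 512                      # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0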
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:12.312 [2024-07-25 10:17:01.954168] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:12.312 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:14.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.085 10:17:14 
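Each of the five iterations whose disconnect messages appear above pairs a host-side connect with a disconnect against the subsystem just exported. A reconstruction under the assumption that the loop body (xtrace-suppressed inside connect_disconnect.sh) uses plain nvme-cli:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
    done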
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.085 rmmod nvme_tcp 00:10:25.085 rmmod nvme_fabrics 00:10:25.085 rmmod nvme_keyring 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1463926 ']' 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1463926 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1463926 ']' 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1463926 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1463926 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1463926' 00:10:25.085 killing process with pid 1463926 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1463926 00:10:25.085 10:17:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1463926 00:10:25.344 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.344 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.344 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.344 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.344 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.344 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.344 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.344 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.883 00:10:27.883 real 0m17.474s 00:10:27.883 user 0m52.532s 00:10:27.883 sys 0m3.015s 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.883 10:17:17 
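The teardown (nvmftestfini) condenses to the lines below. The kill/wait pair, module removals, and address flush are verbatim from the trace; the namespace deletion runs with xtrace disabled, so that step is an assumption about what _remove_spdk_ns does.

    modprobe -v -r nvme-tcp            # the rmmod lines above show nvme_fabrics and nvme_keyring going too
    kill 1463926 && wait 1463926       # killprocess: default SIGTERM to reactor_0, then reap it
    ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns (its output is discarded)
    ip -4 addr flush cvl_0_1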
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.883 ************************************ 00:10:27.883 END TEST nvmf_connect_disconnect 00:10:27.883 ************************************ 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:27.883 ************************************ 00:10:27.883 START TEST nvmf_multitarget 00:10:27.883 ************************************ 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:27.883 * Looking for test storage... 00:10:27.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:27.883 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.884 10:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.884 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:29.264 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.264 10:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:29.264 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:29.264 Found net devices under 0000:08:00.0: cvl_0_0 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:29.264 Found net devices under 0000:08:00.1: cvl_0_1 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:29.264 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:29.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:10:29.265 00:10:29.265 --- 10.0.0.2 ping statistics --- 00:10:29.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.265 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:10:29.265 00:10:29.265 --- 10.0.0.1 ping statistics --- 00:10:29.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.265 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1466708 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1466708 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1466708 ']' 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.265 10:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:29.265 [2024-07-25 10:17:19.019618] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:10:29.265 [2024-07-25 10:17:19.019716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.523 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.523 [2024-07-25 10:17:19.090266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.523 [2024-07-25 10:17:19.211052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.523 [2024-07-25 10:17:19.211121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.523 [2024-07-25 10:17:19.211136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.523 [2024-07-25 10:17:19.211149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.523 [2024-07-25 10:17:19.211161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.523 [2024-07-25 10:17:19.211218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.523 [2024-07-25 10:17:19.211270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.523 [2024-07-25 10:17:19.211322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.523 [2024-07-25 10:17:19.211326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:29.782 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:30.040 "nvmf_tgt_1" 00:10:30.040 10:17:19 
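The multitarget test drives a dedicated RPC helper rather than rpc.py, and its pattern is create-and-count. Condensed from the trace (the -s flag is attested above; its exact meaning is whatever multitarget_rpc.py maps it to for nvmf_create_target):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists at the start
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new name, "nvmf_tgt_1", on success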
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:30.040 "nvmf_tgt_2" 00:10:30.040 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:30.040 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:30.299 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:30.299 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:30.299 true 00:10:30.299 10:17:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:30.557 true 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.557 rmmod nvme_tcp 00:10:30.557 rmmod nvme_fabrics 00:10:30.557 rmmod nvme_keyring 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1466708 ']' 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1466708 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1466708 ']' 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1466708 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
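The symmetric half, straight from the trace: count the targets, delete the two extras (each delete returns true), and confirm only the default target remains.

    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default target plus nvmf_tgt_1 and nvmf_tgt_2
    $rpc_py nvmf_delete_target -n nvmf_tgt_1              # returns true
    $rpc_py nvmf_delete_target -n nvmf_tgt_2              # returns true
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target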
00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1466708 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1466708' 00:10:30.557 killing process with pid 1466708 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1466708 00:10:30.557 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1466708 00:10:30.816 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.816 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.816 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.816 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.816 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.816 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.816 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.816 10:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.350 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:33.351 00:10:33.351 real 0m5.407s 00:10:33.351 user 0m6.677s 00:10:33.351 sys 0m1.622s 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:33.351 ************************************ 00:10:33.351 END TEST nvmf_multitarget 00:10:33.351 ************************************ 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:33.351 ************************************ 00:10:33.351 START TEST nvmf_rpc 00:10:33.351 ************************************ 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:33.351 * Looking for test storage... 
00:10:33.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.351 10:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.351 10:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.729 10:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:34.729 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:34.729 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.729 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.730 
10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:34.730 Found net devices under 0000:08:00.0: cvl_0_0 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:34.730 Found net devices under 0000:08:00.1: cvl_0_1 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.730 10:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:34.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:10:34.730 00:10:34.730 --- 10.0.0.2 ping statistics --- 00:10:34.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.730 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:34.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:10:34.730 00:10:34.730 --- 10.0.0.1 ping statistics --- 00:10:34.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.730 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1468338 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:34.730 10:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1468338 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1468338 ']' 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.730 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.730 [2024-07-25 10:17:24.424931] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:10:34.730 [2024-07-25 10:17:24.425026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.730 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.730 [2024-07-25 10:17:24.494653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.989 [2024-07-25 10:17:24.615241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.989 [2024-07-25 10:17:24.615308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.989 [2024-07-25 10:17:24.615324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.989 [2024-07-25 10:17:24.615337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.989 [2024-07-25 10:17:24.615349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:34.989 [2024-07-25 10:17:24.615429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.989 [2024-07-25 10:17:24.615458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.989 [2024-07-25 10:17:24.615859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.989 [2024-07-25 10:17:24.615890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.989 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.989 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:34.989 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:34.989 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:34.989 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.989 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.989 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:34.989 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.989 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:35.250 "tick_rate": 2700000000, 00:10:35.250 "poll_groups": [ 00:10:35.250 { 00:10:35.250 "name": "nvmf_tgt_poll_group_000", 00:10:35.250 "admin_qpairs": 0, 00:10:35.250 "io_qpairs": 0, 00:10:35.250 "current_admin_qpairs": 0, 00:10:35.250 "current_io_qpairs": 0, 00:10:35.250 "pending_bdev_io": 0, 00:10:35.250 "completed_nvme_io": 0, 00:10:35.250 "transports": [] 00:10:35.250 }, 00:10:35.250 { 00:10:35.250 "name": "nvmf_tgt_poll_group_001", 00:10:35.250 "admin_qpairs": 0, 00:10:35.250 "io_qpairs": 0, 00:10:35.250 "current_admin_qpairs": 0, 00:10:35.250 "current_io_qpairs": 0, 00:10:35.250 "pending_bdev_io": 0, 00:10:35.250 "completed_nvme_io": 0, 00:10:35.250 "transports": [] 00:10:35.250 }, 00:10:35.250 { 00:10:35.250 "name": "nvmf_tgt_poll_group_002", 00:10:35.250 "admin_qpairs": 0, 00:10:35.250 "io_qpairs": 0, 00:10:35.250 "current_admin_qpairs": 0, 00:10:35.250 "current_io_qpairs": 0, 00:10:35.250 "pending_bdev_io": 0, 00:10:35.250 "completed_nvme_io": 0, 00:10:35.250 "transports": [] 00:10:35.250 }, 00:10:35.250 { 00:10:35.250 "name": "nvmf_tgt_poll_group_003", 00:10:35.250 "admin_qpairs": 0, 00:10:35.250 "io_qpairs": 0, 00:10:35.250 "current_admin_qpairs": 0, 00:10:35.250 "current_io_qpairs": 0, 00:10:35.250 "pending_bdev_io": 0, 00:10:35.250 "completed_nvme_io": 0, 00:10:35.250 "transports": [] 00:10:35.250 } 00:10:35.250 ] 00:10:35.250 }' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.250 [2024-07-25 10:17:24.863071] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:35.250 "tick_rate": 2700000000, 00:10:35.250 "poll_groups": [ 00:10:35.250 { 00:10:35.250 "name": "nvmf_tgt_poll_group_000", 00:10:35.250 "admin_qpairs": 0, 00:10:35.250 "io_qpairs": 0, 00:10:35.250 "current_admin_qpairs": 0, 00:10:35.250 "current_io_qpairs": 0, 00:10:35.250 "pending_bdev_io": 0, 00:10:35.250 "completed_nvme_io": 0, 00:10:35.250 "transports": [ 00:10:35.250 { 00:10:35.250 "trtype": "TCP" 00:10:35.250 } 00:10:35.250 ] 00:10:35.250 }, 00:10:35.250 { 00:10:35.250 "name": "nvmf_tgt_poll_group_001", 00:10:35.250 "admin_qpairs": 0, 00:10:35.250 "io_qpairs": 0, 00:10:35.250 "current_admin_qpairs": 0, 00:10:35.250 "current_io_qpairs": 0, 00:10:35.250 "pending_bdev_io": 0, 00:10:35.250 "completed_nvme_io": 0, 00:10:35.250 "transports": [ 00:10:35.250 { 00:10:35.250 "trtype": "TCP" 00:10:35.250 } 00:10:35.250 ] 00:10:35.250 }, 00:10:35.250 { 00:10:35.250 "name": "nvmf_tgt_poll_group_002", 00:10:35.250 "admin_qpairs": 0, 00:10:35.250 "io_qpairs": 0, 00:10:35.250 "current_admin_qpairs": 0, 00:10:35.250 "current_io_qpairs": 0, 00:10:35.250 "pending_bdev_io": 0, 00:10:35.250 "completed_nvme_io": 0, 00:10:35.250 "transports": [ 00:10:35.250 { 00:10:35.250 "trtype": "TCP" 00:10:35.250 } 00:10:35.250 ] 00:10:35.250 }, 00:10:35.250 { 00:10:35.250 "name": "nvmf_tgt_poll_group_003", 00:10:35.250 "admin_qpairs": 0, 00:10:35.250 "io_qpairs": 0, 00:10:35.250 "current_admin_qpairs": 0, 00:10:35.250 "current_io_qpairs": 0, 00:10:35.250 "pending_bdev_io": 0, 00:10:35.250 "completed_nvme_io": 0, 00:10:35.250 "transports": [ 00:10:35.250 { 00:10:35.250 "trtype": "TCP" 00:10:35.250 } 00:10:35.250 ] 00:10:35.250 } 00:10:35.250 ] 00:10:35.250 }' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:35.250 10:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:35.250 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.251 Malloc1 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.251 10:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.251 [2024-07-25 10:17:25.021857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:10:35.251 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:10:35.509 [2024-07-25 10:17:25.044315] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:10:35.509 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:35.509 could not add new controller: failed to write to nvme-fabrics device 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.509 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:35.768 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:35.768 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:35.768 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:35.768 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:35.768 10:17:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:38.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.310 [2024-07-25 10:17:27.611975] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:10:38.310 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:38.310 could not add new controller: failed to write to nvme-fabrics device 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.311 10:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.570 10:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.571 10:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:38.571 10:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.571 10:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:38.571 10:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:40.480 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.739 [2024-07-25 10:17:30.267944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.739 
10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.739 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:40.740 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.740 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.740 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.740 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.000 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.000 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:41.000 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.000 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:41.000 10:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 [2024-07-25 10:17:32.837959] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.541 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.542 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:43.542 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.542 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.542 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.542 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:43.542 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.542 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.542 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.542 10:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:43.800 10:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.800 10:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:10:43.800 10:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.800 10:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:43.800 10:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.709 [2024-07-25 10:17:35.469135] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.709 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.969 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.969 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.228 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.228 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:46.228 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.228 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:46.228 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:48.767 10:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:48.767 10:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:48.767 10:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.767 10:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:48.767 10:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.767 10:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:48.767 10:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.767 10:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.767 [2024-07-25 10:17:38.065652] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:48.767 10:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:51.334 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:51.334 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:51.334 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.334 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:51.334 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.334 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.335 [2024-07-25 10:17:40.616113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.335 10:17:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.335 10:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.335 10:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:51.335 10:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.335 10:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:51.335 10:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:53.866 10:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 [2024-07-25 10:17:43.162626] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 [2024-07-25 10:17:43.210702] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.866 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 [2024-07-25 10:17:43.258871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 [2024-07-25 10:17:43.307052] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 [2024-07-25 10:17:43.355228] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.867 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:53.867 "tick_rate": 2700000000, 00:10:53.867 "poll_groups": [ 00:10:53.867 { 00:10:53.867 "name": "nvmf_tgt_poll_group_000", 00:10:53.867 "admin_qpairs": 2, 00:10:53.867 "io_qpairs": 56, 00:10:53.867 "current_admin_qpairs": 0, 00:10:53.867 "current_io_qpairs": 0, 00:10:53.867 "pending_bdev_io": 0, 00:10:53.867 "completed_nvme_io": 109, 00:10:53.867 "transports": [ 00:10:53.867 { 00:10:53.867 "trtype": "TCP" 00:10:53.867 } 00:10:53.867 ] 00:10:53.867 }, 00:10:53.867 { 00:10:53.867 "name": "nvmf_tgt_poll_group_001", 00:10:53.867 "admin_qpairs": 2, 00:10:53.867 "io_qpairs": 56, 00:10:53.867 "current_admin_qpairs": 0, 00:10:53.867 "current_io_qpairs": 0, 00:10:53.867 "pending_bdev_io": 0, 00:10:53.867 "completed_nvme_io": 186, 00:10:53.867 "transports": [ 00:10:53.867 { 00:10:53.867 "trtype": "TCP" 00:10:53.867 } 00:10:53.867 ] 00:10:53.867 }, 00:10:53.867 { 00:10:53.867 "name": "nvmf_tgt_poll_group_002", 00:10:53.867 "admin_qpairs": 1, 00:10:53.867 "io_qpairs": 56, 00:10:53.867 "current_admin_qpairs": 0, 00:10:53.867 "current_io_qpairs": 0, 00:10:53.867 "pending_bdev_io": 0, 00:10:53.867 "completed_nvme_io": 201, 00:10:53.867 "transports": [ 00:10:53.867 { 00:10:53.867 "trtype": "TCP" 00:10:53.867 } 00:10:53.867 ] 00:10:53.867 }, 00:10:53.867 { 00:10:53.868 "name": "nvmf_tgt_poll_group_003", 00:10:53.868 "admin_qpairs": 2, 00:10:53.868 "io_qpairs": 56, 00:10:53.868 "current_admin_qpairs": 0, 00:10:53.868 "current_io_qpairs": 0, 00:10:53.868 "pending_bdev_io": 0, 00:10:53.868 "completed_nvme_io": 78, 00:10:53.868 "transports": [ 00:10:53.868 { 00:10:53.868 "trtype": "TCP" 00:10:53.868 } 00:10:53.868 ] 00:10:53.868 } 00:10:53.868 ] 00:10:53.868 }' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 224 > 0 )) 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:53.868 rmmod nvme_tcp 00:10:53.868 rmmod nvme_fabrics 00:10:53.868 rmmod nvme_keyring 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1468338 ']' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1468338 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1468338 ']' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1468338 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468338 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468338' 00:10:53.868 killing process with pid 1468338 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1468338 00:10:53.868 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1468338 00:10:54.127 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.127 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.127 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.127 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.127 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.127 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
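The (( 7 > 0 )) and (( 224 > 0 )) checks a few lines up total the admin and I/O queue pairs across the four poll groups reported by nvmf_get_stats. A minimal stand-alone equivalent of the jsum helper seen in the log, with jq selecting the field and awk summing it (the rpc.py path is again an assumption):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location
stats=$($rpc nvmf_get_stats)
jsum() { echo "$stats" | jq "$1" | awk '{s+=$1} END {print s}'; }
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 groups x 56 = 224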
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.127 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.127 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.669 00:10:56.669 real 0m23.224s 00:10:56.669 user 1m15.699s 00:10:56.669 sys 0m3.681s 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.669 ************************************ 00:10:56.669 END TEST nvmf_rpc 00:10:56.669 ************************************ 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:56.669 ************************************ 00:10:56.669 START TEST nvmf_invalid 00:10:56.669 ************************************ 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:56.669 * Looking for test storage... 00:10:56.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:56.669 10:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.669 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.670 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.670 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.670 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.670 10:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.670 10:17:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:58.049 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:58.049 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:58.049 Found net devices under 0000:08:00.0: cvl_0_0 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.049 10:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:58.049 Found net devices under 0000:08:00.1: cvl_0_1 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:58.049 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:58.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:58.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms
00:10:58.050 
00:10:58.050 --- 10.0.0.2 ping statistics ---
00:10:58.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:58.050 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:58.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:58.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms
00:10:58.050 
00:10:58.050 --- 10.0.0.1 ping statistics ---
00:10:58.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:58.050 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:58.050 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1471704
00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1471704
00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1471704 ']'
00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:58.308 10:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.308 10:17:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:58.308 [2024-07-25 10:17:47.892779] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:10:58.308 [2024-07-25 10:17:47.892871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.308 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.308 [2024-07-25 10:17:47.973922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.566 [2024-07-25 10:17:48.128762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.566 [2024-07-25 10:17:48.128834] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.566 [2024-07-25 10:17:48.128865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.566 [2024-07-25 10:17:48.128890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.566 [2024-07-25 10:17:48.128911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
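Up to this point the harness has only been building its test bed: nvmf/common.sh moves one port of the e810 pair into a private network namespace, addresses both ends on 10.0.0.0/24, opens the NVMe/TCP port, proves two-way reachability with single-packet pings, and then launches nvmf_tgt inside that namespace. A minimal bash sketch of the sequence, assuming this run's interface names (cvl_0_0, cvl_0_1) and build path; on other hosts the ice driver will expose different netdevs:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator (host) side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

waitforlisten then polls /var/tmp/spdk.sock until the target's RPC server answers, which is what the EAL and reactor notices around this point bracket.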
00:10:58.566 [2024-07-25 10:17:48.129011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.566 [2024-07-25 10:17:48.129293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.566 [2024-07-25 10:17:48.129352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.566 [2024-07-25 10:17:48.129362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.566 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.566 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:10:58.566 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:58.566 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.566 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:58.566 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.566 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:58.566 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5454 00:10:58.824 [2024-07-25 10:17:48.564471] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:58.824 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:58.824 { 00:10:58.824 "nqn": "nqn.2016-06.io.spdk:cnode5454", 00:10:58.824 "tgt_name": "foobar", 00:10:58.824 "method": "nvmf_create_subsystem", 00:10:58.824 "req_id": 1 00:10:58.824 } 00:10:58.824 Got JSON-RPC error response 00:10:58.824 response: 00:10:58.824 { 00:10:58.824 "code": -32603, 00:10:58.824 "message": "Unable to find target foobar" 00:10:58.824 }' 00:10:58.824 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:58.824 { 00:10:58.824 "nqn": "nqn.2016-06.io.spdk:cnode5454", 00:10:58.824 "tgt_name": "foobar", 00:10:58.824 "method": "nvmf_create_subsystem", 00:10:58.824 "req_id": 1 00:10:58.824 } 00:10:58.824 Got JSON-RPC error response 00:10:58.824 response: 00:10:58.824 { 00:10:58.824 "code": -32603, 00:10:58.824 "message": "Unable to find target foobar" 00:10:58.824 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:58.824 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:58.824 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27559 00:10:59.389 [2024-07-25 10:17:48.869522] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27559: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:59.389 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:59.389 { 00:10:59.389 "nqn": "nqn.2016-06.io.spdk:cnode27559", 00:10:59.389 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:59.389 "method": "nvmf_create_subsystem", 00:10:59.389 "req_id": 1 00:10:59.389 } 00:10:59.389 Got JSON-RPC error 
response 00:10:59.389 response: 00:10:59.389 { 00:10:59.389 "code": -32602, 00:10:59.389 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:59.389 }' 00:10:59.389 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:59.389 { 00:10:59.389 "nqn": "nqn.2016-06.io.spdk:cnode27559", 00:10:59.389 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:59.389 "method": "nvmf_create_subsystem", 00:10:59.389 "req_id": 1 00:10:59.389 } 00:10:59.389 Got JSON-RPC error response 00:10:59.389 response: 00:10:59.389 { 00:10:59.389 "code": -32602, 00:10:59.389 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:59.389 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:59.389 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:59.389 10:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22192 00:10:59.647 [2024-07-25 10:17:49.174522] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22192: invalid model number 'SPDK_Controller' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:59.647 { 00:10:59.647 "nqn": "nqn.2016-06.io.spdk:cnode22192", 00:10:59.647 "model_number": "SPDK_Controller\u001f", 00:10:59.647 "method": "nvmf_create_subsystem", 00:10:59.647 "req_id": 1 00:10:59.647 } 00:10:59.647 Got JSON-RPC error response 00:10:59.647 response: 00:10:59.647 { 00:10:59.647 "code": -32602, 00:10:59.647 "message": "Invalid MN SPDK_Controller\u001f" 00:10:59.647 }' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:59.647 { 00:10:59.647 "nqn": "nqn.2016-06.io.spdk:cnode22192", 00:10:59.647 "model_number": "SPDK_Controller\u001f", 00:10:59.647 "method": "nvmf_create_subsystem", 00:10:59.647 "req_id": 1 00:10:59.647 } 00:10:59.647 Got JSON-RPC error response 00:10:59.647 response: 00:10:59.647 { 00:10:59.647 "code": -32602, 00:10:59.647 "message": "Invalid MN SPDK_Controller\u001f" 00:10:59.647 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 53 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
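The long printf/echo/string+= run above (and continuing below until all 21 characters have been collected) is target/invalid.sh's gen_random_s helper unrolled by xtrace: each iteration picks one ASCII code from 32-127, converts it to hex with printf %x, renders it with echo -e '\xNN', and appends the byte. A sketch of the helper as reconstructed from this trace; the in-tree implementation may differ in detail:

    # Build an N-character string from random ASCII codes 32-127. Code 127
    # (DEL) is unprintable, which is how invisible bytes end up in the
    # serial and model numbers under test.
    gen_random_s() {
        local length=$1 ll string=
        local -a chars=({32..127})                # candidate ASCII codes
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

Here it is asked for 21 characters, one more than the 20-byte serial-number field of NVMe Identify Controller; the result, echoed a little further down as '5 Q[Fs,:*3O.]n?fb:R' (its two DEL bytes do not render), is fed to nvmf_create_subsystem as a deliberately bogus -s value.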
00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.647 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '5 Q[Fs,:*3O.]n?fb:R' 00:10:59.648 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '5 Q[Fs,:*3O.]n?fb:R' nqn.2016-06.io.spdk:cnode19211 00:10:59.906 [2024-07-25 10:17:49.547672] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19211: invalid serial number '5 Q[Fs,:*3O.]n?fb:R' 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:59.906 { 00:10:59.906 "nqn": "nqn.2016-06.io.spdk:cnode19211", 00:10:59.906 "serial_number": "5 Q[Fs,:*3O\u007f.]n\u007f?fb:R", 00:10:59.906 "method": "nvmf_create_subsystem", 00:10:59.906 "req_id": 1 00:10:59.906 } 00:10:59.906 Got JSON-RPC error response 00:10:59.906 response: 00:10:59.906 { 00:10:59.906 "code": -32602, 00:10:59.906 "message": "Invalid SN 5 Q[Fs,:*3O\u007f.]n\u007f?fb:R" 00:10:59.906 }' 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 
request: 00:10:59.906 { 00:10:59.906 "nqn": "nqn.2016-06.io.spdk:cnode19211", 00:10:59.906 "serial_number": "5 Q[Fs,:*3O\u007f.]n\u007f?fb:R", 00:10:59.906 "method": "nvmf_create_subsystem", 00:10:59.906 "req_id": 1 00:10:59.906 } 00:10:59.906 Got JSON-RPC error response 00:10:59.906 response: 00:10:59.906 { 00:10:59.906 "code": -32602, 00:10:59.906 "message": "Invalid SN 5 Q[Fs,:*3O\u007f.]n\u007f?fb:R" 00:10:59.906 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.906 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
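Every negative case in this file follows the same shape: run the rpc.py call, capture the JSON-RPC error text, and glob-match the expected message inside [[ ]] (the backslash-heavy patterns such as *\I\n\v\a\l\i\d\ \S\N* are just spaces escaped for the test operator). Once the 41-character model number being assembled here is complete, the check reduces to a sketch like this, with the NQN taken from this run and scripts/rpc.py standing in for the full workspace path:

    mn=$(gen_random_s 41)      # one byte over NVMe's 40-byte model-number field
    out=$(scripts/rpc.py nvmf_create_subsystem -d "$mn" \
          nqn.2016-06.io.spdk:cnode13043 2>&1) || true
    [[ $out == *'Invalid MN'* ]]    # the target must reject the bad model number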
00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6a' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 127 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:59.907 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- 
# (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.908 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=n 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'm.fIj?>qD*`qj|<\&DgG]F2>pn,|fZq2kiD5On`' 00:11:00.167 10:17:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'm.fIj?>qD*`qj|<\&DgG]F2>pn,|fZq2kiD5On`' nqn.2016-06.io.spdk:cnode13043 00:11:00.426 [2024-07-25 10:17:49.985105] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13043: invalid model number 'm.fIj?>qD*`qj|<\&DgG]F2>pn,|fZq2kiD5On`' 00:11:00.426 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:00.426 { 00:11:00.426 "nqn": "nqn.2016-06.io.spdk:cnode13043", 00:11:00.426 "model_number": "m.fIj?>qD*`qj|<\\&Dg\u007fG]F2>pn,\u007f|fZq2kiD5On`", 00:11:00.426 "method": "nvmf_create_subsystem", 00:11:00.426 "req_id": 1 00:11:00.426 } 00:11:00.426 Got JSON-RPC error response 00:11:00.426 response: 00:11:00.426 { 00:11:00.426 "code": -32602, 00:11:00.426 "message": "Invalid MN m.fIj?>qD*`qj|<\\&Dg\u007fG]F2>pn,\u007f|fZq2kiD5On`" 00:11:00.426 }' 00:11:00.426 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:00.426 { 00:11:00.426 "nqn": "nqn.2016-06.io.spdk:cnode13043", 00:11:00.426 "model_number": "m.fIj?>qD*`qj|<\\&Dg\u007fG]F2>pn,\u007f|fZq2kiD5On`", 00:11:00.426 "method": "nvmf_create_subsystem", 00:11:00.426 "req_id": 1 00:11:00.426 } 00:11:00.426 Got JSON-RPC error response 00:11:00.426 response: 00:11:00.426 { 00:11:00.426 "code": -32602, 00:11:00.426 "message": "Invalid MN m.fIj?>qD*`qj|<\\&Dg\u007fG]F2>pn,\u007f|fZq2kiD5On`" 00:11:00.426 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:00.426 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:00.684 [2024-07-25 10:17:50.306280] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.684 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:00.942 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:00.942 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:00.942 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:00.942 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:00.942 
10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:01.200 [2024-07-25 10:17:50.908177] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:01.200 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:01.200 { 00:11:01.200 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:01.200 "listen_address": { 00:11:01.200 "trtype": "tcp", 00:11:01.200 "traddr": "", 00:11:01.200 "trsvcid": "4421" 00:11:01.200 }, 00:11:01.200 "method": "nvmf_subsystem_remove_listener", 00:11:01.200 "req_id": 1 00:11:01.200 } 00:11:01.200 Got JSON-RPC error response 00:11:01.200 response: 00:11:01.200 { 00:11:01.200 "code": -32602, 00:11:01.200 "message": "Invalid parameters" 00:11:01.200 }' 00:11:01.200 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:01.200 { 00:11:01.200 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:01.200 "listen_address": { 00:11:01.200 "trtype": "tcp", 00:11:01.200 "traddr": "", 00:11:01.200 "trsvcid": "4421" 00:11:01.200 }, 00:11:01.200 "method": "nvmf_subsystem_remove_listener", 00:11:01.200 "req_id": 1 00:11:01.200 } 00:11:01.200 Got JSON-RPC error response 00:11:01.200 response: 00:11:01.200 { 00:11:01.200 "code": -32602, 00:11:01.201 "message": "Invalid parameters" 00:11:01.201 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:01.201 10:17:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31070 -i 0 00:11:01.459 [2024-07-25 10:17:51.156979] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31070: invalid cntlid range [0-65519] 00:11:01.459 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:01.459 { 00:11:01.459 "nqn": "nqn.2016-06.io.spdk:cnode31070", 00:11:01.459 "min_cntlid": 0, 00:11:01.459 "method": "nvmf_create_subsystem", 00:11:01.459 "req_id": 1 00:11:01.459 } 00:11:01.459 Got JSON-RPC error response 00:11:01.459 response: 00:11:01.459 { 00:11:01.459 "code": -32602, 00:11:01.459 "message": "Invalid cntlid range [0-65519]" 00:11:01.459 }' 00:11:01.459 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:01.459 { 00:11:01.459 "nqn": "nqn.2016-06.io.spdk:cnode31070", 00:11:01.459 "min_cntlid": 0, 00:11:01.459 "method": "nvmf_create_subsystem", 00:11:01.459 "req_id": 1 00:11:01.459 } 00:11:01.459 Got JSON-RPC error response 00:11:01.459 response: 00:11:01.459 { 00:11:01.459 "code": -32602, 00:11:01.459 "message": "Invalid cntlid range [0-65519]" 00:11:01.459 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:01.459 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27324 -i 65520 00:11:01.716 [2024-07-25 10:17:51.417819] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27324: invalid cntlid range [65520-65519] 00:11:01.716 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:01.716 { 00:11:01.716 "nqn": "nqn.2016-06.io.spdk:cnode27324", 00:11:01.716 "min_cntlid": 65520, 00:11:01.716 "method": 
"nvmf_create_subsystem", 00:11:01.716 "req_id": 1 00:11:01.716 } 00:11:01.716 Got JSON-RPC error response 00:11:01.716 response: 00:11:01.716 { 00:11:01.716 "code": -32602, 00:11:01.716 "message": "Invalid cntlid range [65520-65519]" 00:11:01.716 }' 00:11:01.716 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:01.716 { 00:11:01.716 "nqn": "nqn.2016-06.io.spdk:cnode27324", 00:11:01.716 "min_cntlid": 65520, 00:11:01.716 "method": "nvmf_create_subsystem", 00:11:01.716 "req_id": 1 00:11:01.717 } 00:11:01.717 Got JSON-RPC error response 00:11:01.717 response: 00:11:01.717 { 00:11:01.717 "code": -32602, 00:11:01.717 "message": "Invalid cntlid range [65520-65519]" 00:11:01.717 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:01.717 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28461 -I 0 00:11:01.974 [2024-07-25 10:17:51.654614] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28461: invalid cntlid range [1-0] 00:11:01.974 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:01.974 { 00:11:01.974 "nqn": "nqn.2016-06.io.spdk:cnode28461", 00:11:01.975 "max_cntlid": 0, 00:11:01.975 "method": "nvmf_create_subsystem", 00:11:01.975 "req_id": 1 00:11:01.975 } 00:11:01.975 Got JSON-RPC error response 00:11:01.975 response: 00:11:01.975 { 00:11:01.975 "code": -32602, 00:11:01.975 "message": "Invalid cntlid range [1-0]" 00:11:01.975 }' 00:11:01.975 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:01.975 { 00:11:01.975 "nqn": "nqn.2016-06.io.spdk:cnode28461", 00:11:01.975 "max_cntlid": 0, 00:11:01.975 "method": "nvmf_create_subsystem", 00:11:01.975 "req_id": 1 00:11:01.975 } 00:11:01.975 Got JSON-RPC error response 00:11:01.975 response: 00:11:01.975 { 00:11:01.975 "code": -32602, 00:11:01.975 "message": "Invalid cntlid range [1-0]" 00:11:01.975 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:01.975 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22666 -I 65520 00:11:02.233 [2024-07-25 10:17:51.907452] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22666: invalid cntlid range [1-65520] 00:11:02.233 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:02.233 { 00:11:02.233 "nqn": "nqn.2016-06.io.spdk:cnode22666", 00:11:02.233 "max_cntlid": 65520, 00:11:02.233 "method": "nvmf_create_subsystem", 00:11:02.233 "req_id": 1 00:11:02.233 } 00:11:02.233 Got JSON-RPC error response 00:11:02.233 response: 00:11:02.233 { 00:11:02.233 "code": -32602, 00:11:02.233 "message": "Invalid cntlid range [1-65520]" 00:11:02.233 }' 00:11:02.233 10:17:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:02.233 { 00:11:02.233 "nqn": "nqn.2016-06.io.spdk:cnode22666", 00:11:02.233 "max_cntlid": 65520, 00:11:02.233 "method": "nvmf_create_subsystem", 00:11:02.233 "req_id": 1 00:11:02.233 } 00:11:02.233 Got JSON-RPC error response 00:11:02.233 response: 00:11:02.233 { 00:11:02.233 "code": -32602, 00:11:02.233 "message": "Invalid cntlid range [1-65520]" 00:11:02.233 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:02.233 10:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22053 -i 6 -I 5 00:11:02.490 [2024-07-25 10:17:52.144240] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22053: invalid cntlid range [6-5] 00:11:02.490 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:02.490 { 00:11:02.490 "nqn": "nqn.2016-06.io.spdk:cnode22053", 00:11:02.490 "min_cntlid": 6, 00:11:02.490 "max_cntlid": 5, 00:11:02.490 "method": "nvmf_create_subsystem", 00:11:02.490 "req_id": 1 00:11:02.490 } 00:11:02.490 Got JSON-RPC error response 00:11:02.490 response: 00:11:02.490 { 00:11:02.490 "code": -32602, 00:11:02.490 "message": "Invalid cntlid range [6-5]" 00:11:02.490 }' 00:11:02.490 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:02.490 { 00:11:02.490 "nqn": "nqn.2016-06.io.spdk:cnode22053", 00:11:02.490 "min_cntlid": 6, 00:11:02.490 "max_cntlid": 5, 00:11:02.490 "method": "nvmf_create_subsystem", 00:11:02.490 "req_id": 1 00:11:02.490 } 00:11:02.490 Got JSON-RPC error response 00:11:02.490 response: 00:11:02.490 { 00:11:02.490 "code": -32602, 00:11:02.490 "message": "Invalid cntlid range [6-5]" 00:11:02.490 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:02.490 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:02.748 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:02.748 { 00:11:02.748 "name": "foobar", 00:11:02.748 "method": "nvmf_delete_target", 00:11:02.748 "req_id": 1 00:11:02.748 } 00:11:02.748 Got JSON-RPC error response 00:11:02.748 response: 00:11:02.748 { 00:11:02.748 "code": -32602, 00:11:02.748 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:02.748 }' 00:11:02.748 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:02.748 { 00:11:02.748 "name": "foobar", 00:11:02.748 "method": "nvmf_delete_target", 00:11:02.748 "req_id": 1 00:11:02.748 } 00:11:02.748 Got JSON-RPC error response 00:11:02.749 response: 00:11:02.749 { 00:11:02.749 "code": -32602, 00:11:02.749 "message": "The specified target doesn't exist, cannot delete it." 
00:11:02.749 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.749 rmmod nvme_tcp 00:11:02.749 rmmod nvme_fabrics 00:11:02.749 rmmod nvme_keyring 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1471704 ']' 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1471704 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1471704 ']' 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1471704 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471704 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471704' 00:11:02.749 killing process with pid 1471704 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1471704 00:11:02.749 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1471704 00:11:03.009 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.009 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.009 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.009 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.009 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.009 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.009 
10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.009 10:17:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.914 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:04.914 00:11:04.914 real 0m8.716s 00:11:04.914 user 0m21.740s 00:11:04.914 sys 0m2.204s 00:11:04.914 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.914 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:04.914 ************************************ 00:11:04.914 END TEST nvmf_invalid 00:11:04.914 ************************************ 00:11:04.914 10:17:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:04.914 10:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:04.914 10:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.914 10:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:04.914 ************************************ 00:11:04.914 START TEST nvmf_connect_stress 00:11:04.914 ************************************ 00:11:04.914 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:05.173 * Looking for test storage... 00:11:05.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:05.173 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:07.078 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:07.078 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.078 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:07.079 Found net devices under 0000:08:00.0: cvl_0_0 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.079 10:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:07.079 Found net devices under 0000:08:00.1: cvl_0_1 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:07.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:11:07.079 00:11:07.079 --- 10.0.0.2 ping statistics --- 00:11:07.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.079 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:11:07.079 00:11:07.079 --- 10.0.0.1 ping statistics --- 00:11:07.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.079 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1473774 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1473774 00:11:07.079 10:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1473774 ']' 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.079 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.079 [2024-07-25 10:17:56.578171] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:11:07.079 [2024-07-25 10:17:56.578266] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.079 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.079 [2024-07-25 10:17:56.644784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.079 [2024-07-25 10:17:56.760817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.079 [2024-07-25 10:17:56.760881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.079 [2024-07-25 10:17:56.760897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.079 [2024-07-25 10:17:56.760910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.079 [2024-07-25 10:17:56.760922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
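The trace above is the standard bring-up for these phy tests: the two E810 ports show up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) so target and initiator can share one host, both ends get 10.0.0.x/24 addresses, a ping in each direction proves the path, and nvmf_tgt (pid 1473774) is then launched inside the namespace on core mask 0xE. Condensed from the common.sh calls visible in the log, the wiring is essentially:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target ns -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

(The relative nvmf_tgt path and the trailing & are shorthand here; the log uses the absolute workspace path and the harness's own process bookkeeping.)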
00:11:07.079 [2024-07-25 10:17:56.761003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.079 [2024-07-25 10:17:56.761058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.079 [2024-07-25 10:17:56.761062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.338 [2024-07-25 10:17:56.893355] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.338 [2024-07-25 10:17:56.926773] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.338 NULL1 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=1473885 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.338 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
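From here the test proper runs: the connect_stress client (PID 1473885) drives repeated fabric connects to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 for 10 seconds (-t 10), while the seq 1 20 / cat pairs above queue twenty RPC batches into rpc.txt. The repeated kill -0 1473885 / rpc_cmd entries that follow are the script replaying that batch against the target for as long as the stress client stays alive, i.e. roughly this (a sketch; the exact payload appended by the cat at connect_stress.sh line 28 is not visible in the trace):

  while kill -0 "$PERF_PID" 2> /dev/null; do    # stress client still running?
      rpc_cmd < "$rpcs"                         # replay the queued RPC batch at the target
  done

Each iteration mutates the target while the initiator is mid-connect, which is the point of the stress test.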
00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.339 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.597 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.597 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885 00:11:07.597 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.597 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.597 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.855 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.855 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885 00:11:07.855 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.855 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.855 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.420 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.420 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885 00:11:08.420 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.420 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.420 10:17:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.678 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.678 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885 00:11:08.678 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.678 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.678 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.935 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.935 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885 00:11:08.935 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.935 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.935 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.192 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.192 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885 00:11:09.192 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.192 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.192 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.758 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.758 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885 00:11:09.758 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.758 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.758 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.015 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.015 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885 00:11:10.015 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.015 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.015 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:10.273 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.273 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885
00:11:10.273 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:10.273 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.273 10:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[... the same [[ 0 == 0 ]] / kill -0 1473885 / rpc_cmd cycle repeats every 250-600 ms through 00:11:17.185 ...]
00:11:17.443 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1473885
00:11:17.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: 
kill: (1473885) - No such process 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1473885 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:17.702 rmmod nvme_tcp 00:11:17.702 rmmod nvme_fabrics 00:11:17.702 rmmod nvme_keyring 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1473774 ']' 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1473774 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1473774 ']' 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1473774 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473774 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473774' 00:11:17.702 killing process with pid 1473774 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1473774 00:11:17.702 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1473774 00:11:17.962 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.962 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:17.962 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:11:17.962 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.962 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:17.962 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.962 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.962 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:19.954 00:11:19.954 real 0m14.925s 00:11:19.954 user 0m38.276s 00:11:19.954 sys 0m5.429s 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.954 ************************************ 00:11:19.954 END TEST nvmf_connect_stress 00:11:19.954 ************************************ 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.954 ************************************ 00:11:19.954 START TEST nvmf_fused_ordering 00:11:19.954 ************************************ 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:19.954 * Looking for test storage... 
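The connect_stress block that just wrapped up is, at heart, a liveness poll: the script re-evaluates kill -0 1473885 on every pass, which succeeds only while the stress process still exists, and fires an RPC at the target in between so the admin path stays busy under fabric load. A minimal sketch of that pattern, assuming a hypothetical $stress_pid variable and the stock rpc.py client (the real script wraps both in helper functions):

    while kill -0 "$stress_pid" 2>/dev/null; do   # true while the process is alive
        # keep the target servicing admin commands while the fabric is stressed;
        # rpc_get_methods is a cheap, read-only RPC
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null
        sleep 0.25                                # roughly the cadence seen in the log
    done
    # once kill -0 reports "No such process", the harness wait()s on the pid,
    # removes its rpc.txt scratch file, and runs nvmftestfini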
00:11:19.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.954 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same golangci/protoc/go triple repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated triples ...]:/var/lib/snapd/snap/bin
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated triples ...]:/var/lib/snapd/snap/bin
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same repeated triples ...]:/var/lib/snapd/snap/bin
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:20.214 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:21.592 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.593 10:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:21.593 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:21.593 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
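The device-discovery pass above is plain sysfs matching: every PCI function's vendor/device pair is compared against the supported-NIC tables (here the Intel E810 pair, 0x8086/0x159b), and each match is reported together with the kernel net interfaces registered under it. A rough standalone equivalent of the scan, illustrative only (the harness additionally caches the PCI bus and checks driver binding):

    #!/usr/bin/env bash
    # List E810 functions and their kernel net devices, the way the log's
    # "Found 0000:08:00.x" lines are produced.
    intel=0x8086 e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" && $device == "$e810" ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done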
00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:21.593 Found net devices under 0000:08:00.0: cvl_0_0 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:21.593 Found net devices under 0000:08:00.1: cvl_0_1 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.593 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:11:21.852 00:11:21.852 --- 10.0.0.2 ping statistics --- 00:11:21.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.852 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:11:21.852 00:11:21.852 --- 10.0.0.1 ping statistics --- 00:11:21.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.852 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1476935 00:11:21.852 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:21.853 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1476935 00:11:21.853 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1476935 ']' 00:11:21.853 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.853 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.853 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.853 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.853 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.853 [2024-07-25 10:18:11.491146] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
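The nvmf_tcp_init block above builds the whole target/initiator topology out of one network namespace: the target port cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, its peer cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened, and a single ping in each direction proves the path. Replayed by hand, the sequence is roughly (interface and namespace names as in the log; run as root):

    # Recreate the two-port target/initiator split used by the nvmf tests.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target side lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator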
00:11:21.853 [2024-07-25 10:18:11.491242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.853 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.853 [2024-07-25 10:18:11.557147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.111 [2024-07-25 10:18:11.676048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.111 [2024-07-25 10:18:11.676117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.111 [2024-07-25 10:18:11.676133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.111 [2024-07-25 10:18:11.676146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.111 [2024-07-25 10:18:11.676158] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.111 [2024-07-25 10:18:11.676190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.111 [2024-07-25 10:18:11.808194] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.111 [2024-07-25 10:18:11.824374] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.111 NULL1 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.111 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.112 10:18:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:22.112 [2024-07-25 10:18:11.870648] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
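The rpc_cmd sequence above is the entire target provisioning step for this test: a TCP transport, a subsystem capped at 10 queue pairs, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1 (the "Namespace ID: 1 size: 1GB" the initiator reports below). The same sequence can be replayed by hand against a running nvmf_tgt; the socket path here is assumed to be the default /var/tmp/spdk.sock:

    # Provision the fused_ordering target over JSON-RPC (flags as recorded above).
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512     # 1000 MiB of zeroes, 512-byte blocks
    $RPC bdev_wait_for_examine               # let bdev examine callbacks settle
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1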
00:11:22.112 [2024-07-25 10:18:11.870701] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476956 ]
00:11:22.370 EAL: No free 2048 kB hugepages reported on node 1
00:11:22.628 Attached to nqn.2016-06.io.spdk:cnode1
00:11:22.628 Namespace ID: 1 size: 1GB
00:11:22.628 fused_ordering(0)
00:11:22.628 fused_ordering(1)
[... fused_ordering(2) through fused_ordering(205), stamped 00:11:22.628-00:11:22.629 ...]
00:11:23.196 fused_ordering(206)
[... fused_ordering(207) through fused_ordering(410), stamped 00:11:23.196 ...]
00:11:23.763 fused_ordering(411)
[... fused_ordering(412) through fused_ordering(615), stamped 00:11:23.763-00:11:23.764 ...]
00:11:24.699 fused_ordering(616)
[... fused_ordering(617) through fused_ordering(740), stamped 00:11:24.699 ...]
00:11:24.699 fused_ordering(741)
00:11:24.699 fused_ordering(742) 00:11:24.699 fused_ordering(743) 00:11:24.699 fused_ordering(744) 00:11:24.699 fused_ordering(745) 00:11:24.699 fused_ordering(746) 00:11:24.699 fused_ordering(747) 00:11:24.699 fused_ordering(748) 00:11:24.699 fused_ordering(749) 00:11:24.699 fused_ordering(750) 00:11:24.699 fused_ordering(751) 00:11:24.699 fused_ordering(752) 00:11:24.699 fused_ordering(753) 00:11:24.699 fused_ordering(754) 00:11:24.699 fused_ordering(755) 00:11:24.699 fused_ordering(756) 00:11:24.699 fused_ordering(757) 00:11:24.699 fused_ordering(758) 00:11:24.699 fused_ordering(759) 00:11:24.699 fused_ordering(760) 00:11:24.699 fused_ordering(761) 00:11:24.699 fused_ordering(762) 00:11:24.699 fused_ordering(763) 00:11:24.699 fused_ordering(764) 00:11:24.699 fused_ordering(765) 00:11:24.699 fused_ordering(766) 00:11:24.699 fused_ordering(767) 00:11:24.699 fused_ordering(768) 00:11:24.699 fused_ordering(769) 00:11:24.699 fused_ordering(770) 00:11:24.699 fused_ordering(771) 00:11:24.699 fused_ordering(772) 00:11:24.699 fused_ordering(773) 00:11:24.699 fused_ordering(774) 00:11:24.699 fused_ordering(775) 00:11:24.699 fused_ordering(776) 00:11:24.699 fused_ordering(777) 00:11:24.699 fused_ordering(778) 00:11:24.699 fused_ordering(779) 00:11:24.699 fused_ordering(780) 00:11:24.699 fused_ordering(781) 00:11:24.699 fused_ordering(782) 00:11:24.699 fused_ordering(783) 00:11:24.699 fused_ordering(784) 00:11:24.699 fused_ordering(785) 00:11:24.699 fused_ordering(786) 00:11:24.699 fused_ordering(787) 00:11:24.699 fused_ordering(788) 00:11:24.699 fused_ordering(789) 00:11:24.699 fused_ordering(790) 00:11:24.699 fused_ordering(791) 00:11:24.699 fused_ordering(792) 00:11:24.699 fused_ordering(793) 00:11:24.699 fused_ordering(794) 00:11:24.699 fused_ordering(795) 00:11:24.699 fused_ordering(796) 00:11:24.699 fused_ordering(797) 00:11:24.699 fused_ordering(798) 00:11:24.699 fused_ordering(799) 00:11:24.699 fused_ordering(800) 00:11:24.699 fused_ordering(801) 00:11:24.699 fused_ordering(802) 00:11:24.699 fused_ordering(803) 00:11:24.699 fused_ordering(804) 00:11:24.699 fused_ordering(805) 00:11:24.699 fused_ordering(806) 00:11:24.699 fused_ordering(807) 00:11:24.699 fused_ordering(808) 00:11:24.699 fused_ordering(809) 00:11:24.699 fused_ordering(810) 00:11:24.699 fused_ordering(811) 00:11:24.699 fused_ordering(812) 00:11:24.699 fused_ordering(813) 00:11:24.699 fused_ordering(814) 00:11:24.699 fused_ordering(815) 00:11:24.699 fused_ordering(816) 00:11:24.699 fused_ordering(817) 00:11:24.699 fused_ordering(818) 00:11:24.699 fused_ordering(819) 00:11:24.699 fused_ordering(820) 00:11:25.635 fused_ordering(821) 00:11:25.635 fused_ordering(822) 00:11:25.635 fused_ordering(823) 00:11:25.635 fused_ordering(824) 00:11:25.635 fused_ordering(825) 00:11:25.635 fused_ordering(826) 00:11:25.635 fused_ordering(827) 00:11:25.635 fused_ordering(828) 00:11:25.635 fused_ordering(829) 00:11:25.635 fused_ordering(830) 00:11:25.635 fused_ordering(831) 00:11:25.635 fused_ordering(832) 00:11:25.635 fused_ordering(833) 00:11:25.635 fused_ordering(834) 00:11:25.635 fused_ordering(835) 00:11:25.635 fused_ordering(836) 00:11:25.635 fused_ordering(837) 00:11:25.635 fused_ordering(838) 00:11:25.635 fused_ordering(839) 00:11:25.635 fused_ordering(840) 00:11:25.635 fused_ordering(841) 00:11:25.635 fused_ordering(842) 00:11:25.635 fused_ordering(843) 00:11:25.635 fused_ordering(844) 00:11:25.635 fused_ordering(845) 00:11:25.635 fused_ordering(846) 00:11:25.635 fused_ordering(847) 00:11:25.635 fused_ordering(848) 00:11:25.635 
fused_ordering(849) 00:11:25.635 fused_ordering(850) 00:11:25.635 fused_ordering(851) 00:11:25.635 fused_ordering(852) 00:11:25.635 fused_ordering(853) 00:11:25.635 fused_ordering(854) 00:11:25.635 fused_ordering(855) 00:11:25.635 fused_ordering(856) 00:11:25.635 fused_ordering(857) 00:11:25.635 fused_ordering(858) 00:11:25.635 fused_ordering(859) 00:11:25.635 fused_ordering(860) 00:11:25.635 fused_ordering(861) 00:11:25.635 fused_ordering(862) 00:11:25.636 fused_ordering(863) 00:11:25.636 fused_ordering(864) 00:11:25.636 fused_ordering(865) 00:11:25.636 fused_ordering(866) 00:11:25.636 fused_ordering(867) 00:11:25.636 fused_ordering(868) 00:11:25.636 fused_ordering(869) 00:11:25.636 fused_ordering(870) 00:11:25.636 fused_ordering(871) 00:11:25.636 fused_ordering(872) 00:11:25.636 fused_ordering(873) 00:11:25.636 fused_ordering(874) 00:11:25.636 fused_ordering(875) 00:11:25.636 fused_ordering(876) 00:11:25.636 fused_ordering(877) 00:11:25.636 fused_ordering(878) 00:11:25.636 fused_ordering(879) 00:11:25.636 fused_ordering(880) 00:11:25.636 fused_ordering(881) 00:11:25.636 fused_ordering(882) 00:11:25.636 fused_ordering(883) 00:11:25.636 fused_ordering(884) 00:11:25.636 fused_ordering(885) 00:11:25.636 fused_ordering(886) 00:11:25.636 fused_ordering(887) 00:11:25.636 fused_ordering(888) 00:11:25.636 fused_ordering(889) 00:11:25.636 fused_ordering(890) 00:11:25.636 fused_ordering(891) 00:11:25.636 fused_ordering(892) 00:11:25.636 fused_ordering(893) 00:11:25.636 fused_ordering(894) 00:11:25.636 fused_ordering(895) 00:11:25.636 fused_ordering(896) 00:11:25.636 fused_ordering(897) 00:11:25.636 fused_ordering(898) 00:11:25.636 fused_ordering(899) 00:11:25.636 fused_ordering(900) 00:11:25.636 fused_ordering(901) 00:11:25.636 fused_ordering(902) 00:11:25.636 fused_ordering(903) 00:11:25.636 fused_ordering(904) 00:11:25.636 fused_ordering(905) 00:11:25.636 fused_ordering(906) 00:11:25.636 fused_ordering(907) 00:11:25.636 fused_ordering(908) 00:11:25.636 fused_ordering(909) 00:11:25.636 fused_ordering(910) 00:11:25.636 fused_ordering(911) 00:11:25.636 fused_ordering(912) 00:11:25.636 fused_ordering(913) 00:11:25.636 fused_ordering(914) 00:11:25.636 fused_ordering(915) 00:11:25.636 fused_ordering(916) 00:11:25.636 fused_ordering(917) 00:11:25.636 fused_ordering(918) 00:11:25.636 fused_ordering(919) 00:11:25.636 fused_ordering(920) 00:11:25.636 fused_ordering(921) 00:11:25.636 fused_ordering(922) 00:11:25.636 fused_ordering(923) 00:11:25.636 fused_ordering(924) 00:11:25.636 fused_ordering(925) 00:11:25.636 fused_ordering(926) 00:11:25.636 fused_ordering(927) 00:11:25.636 fused_ordering(928) 00:11:25.636 fused_ordering(929) 00:11:25.636 fused_ordering(930) 00:11:25.636 fused_ordering(931) 00:11:25.636 fused_ordering(932) 00:11:25.636 fused_ordering(933) 00:11:25.636 fused_ordering(934) 00:11:25.636 fused_ordering(935) 00:11:25.636 fused_ordering(936) 00:11:25.636 fused_ordering(937) 00:11:25.636 fused_ordering(938) 00:11:25.636 fused_ordering(939) 00:11:25.636 fused_ordering(940) 00:11:25.636 fused_ordering(941) 00:11:25.636 fused_ordering(942) 00:11:25.636 fused_ordering(943) 00:11:25.636 fused_ordering(944) 00:11:25.636 fused_ordering(945) 00:11:25.636 fused_ordering(946) 00:11:25.636 fused_ordering(947) 00:11:25.636 fused_ordering(948) 00:11:25.636 fused_ordering(949) 00:11:25.636 fused_ordering(950) 00:11:25.636 fused_ordering(951) 00:11:25.636 fused_ordering(952) 00:11:25.636 fused_ordering(953) 00:11:25.636 fused_ordering(954) 00:11:25.636 fused_ordering(955) 00:11:25.636 fused_ordering(956) 
00:11:25.636 fused_ordering(957) 00:11:25.636 fused_ordering(958) 00:11:25.636 fused_ordering(959) 00:11:25.636 fused_ordering(960) 00:11:25.636 fused_ordering(961) 00:11:25.636 fused_ordering(962) 00:11:25.636 fused_ordering(963) 00:11:25.636 fused_ordering(964) 00:11:25.636 fused_ordering(965) 00:11:25.636 fused_ordering(966) 00:11:25.636 fused_ordering(967) 00:11:25.636 fused_ordering(968) 00:11:25.636 fused_ordering(969) 00:11:25.636 fused_ordering(970) 00:11:25.636 fused_ordering(971) 00:11:25.636 fused_ordering(972) 00:11:25.636 fused_ordering(973) 00:11:25.636 fused_ordering(974) 00:11:25.636 fused_ordering(975) 00:11:25.636 fused_ordering(976) 00:11:25.636 fused_ordering(977) 00:11:25.636 fused_ordering(978) 00:11:25.636 fused_ordering(979) 00:11:25.636 fused_ordering(980) 00:11:25.636 fused_ordering(981) 00:11:25.636 fused_ordering(982) 00:11:25.636 fused_ordering(983) 00:11:25.636 fused_ordering(984) 00:11:25.636 fused_ordering(985) 00:11:25.636 fused_ordering(986) 00:11:25.636 fused_ordering(987) 00:11:25.636 fused_ordering(988) 00:11:25.636 fused_ordering(989) 00:11:25.636 fused_ordering(990) 00:11:25.636 fused_ordering(991) 00:11:25.636 fused_ordering(992) 00:11:25.636 fused_ordering(993) 00:11:25.636 fused_ordering(994) 00:11:25.636 fused_ordering(995) 00:11:25.636 fused_ordering(996) 00:11:25.636 fused_ordering(997) 00:11:25.636 fused_ordering(998) 00:11:25.636 fused_ordering(999) 00:11:25.636 fused_ordering(1000) 00:11:25.636 fused_ordering(1001) 00:11:25.636 fused_ordering(1002) 00:11:25.636 fused_ordering(1003) 00:11:25.636 fused_ordering(1004) 00:11:25.636 fused_ordering(1005) 00:11:25.636 fused_ordering(1006) 00:11:25.636 fused_ordering(1007) 00:11:25.636 fused_ordering(1008) 00:11:25.636 fused_ordering(1009) 00:11:25.636 fused_ordering(1010) 00:11:25.636 fused_ordering(1011) 00:11:25.636 fused_ordering(1012) 00:11:25.636 fused_ordering(1013) 00:11:25.636 fused_ordering(1014) 00:11:25.636 fused_ordering(1015) 00:11:25.636 fused_ordering(1016) 00:11:25.636 fused_ordering(1017) 00:11:25.636 fused_ordering(1018) 00:11:25.636 fused_ordering(1019) 00:11:25.636 fused_ordering(1020) 00:11:25.636 fused_ordering(1021) 00:11:25.636 fused_ordering(1022) 00:11:25.636 fused_ordering(1023) 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.636 rmmod nvme_tcp 00:11:25.636 rmmod nvme_fabrics 00:11:25.636 rmmod nvme_keyring 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1476935 ']' 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1476935 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1476935 ']' 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1476935 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.636 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1476935 00:11:25.896 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1476935' 00:11:25.897 killing process with pid 1476935 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1476935 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1476935 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.897 10:18:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.437 00:11:28.437 real 0m8.035s 00:11:28.437 user 0m6.609s 00:11:28.437 sys 0m3.184s 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.437 ************************************ 00:11:28.437 END TEST nvmf_fused_ordering 00:11:28.437 ************************************ 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.437 10:18:17 
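[editor's note] The nvmftestfini teardown just traced reduces to a short sequence. A minimal sketch, assuming the target pid is in $nvmfpid and using the interface/namespace names from this run; the namespace removal is inferred from the _remove_spdk_ns helper name, not shown verbatim in the trace:

trap - SIGINT SIGTERM EXIT                # drop the cleanup trap installed at test start
sync                                      # settle outstanding I/O before touching modules
modprobe -v -r nvme-tcp                   # the real helper retries this in a {1..20} loop
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"        # killprocess: stop the nvmf_tgt reactor
ip netns del cvl_0_0_ns_spdk 2>/dev/null  # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                  # clear the initiator-side test address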
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.437 ************************************ 00:11:28.437 START TEST nvmf_ns_masking 00:11:28.437 ************************************ 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.437 * Looking for test storage... 00:11:28.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.437 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain dirs repeated several more times; duplicate entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous value, duplicates elided] 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous value, duplicates elided] 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [exported PATH value, same as @4; elided] 00:11:28.438 10:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=80607e30-346b-43da-8267-59c803ede770 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e67a8a01-d230-4cb7-9d5c-bfca25956c35 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=694f391a-e133-40a4-8ae3-823e3973e0b9 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.438 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
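[editor's note] Everything ns_masking.sh needs is prepared in the handful of assignments traced above. A condensed sketch; $SPDK_DIR stands in for the full workspace path, and the UUIDs are regenerated on every run, so the literal values in this log are not stable:

rpc_py=$SPDK_DIR/scripts/rpc.py          # RPC client for the running nvmf_tgt
hostsock=/var/tmp/host.sock
loops=5
ns1uuid=$(uuidgen)                       # per-namespace UUIDs
ns2uuid=$(uuidgen)
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1     # subsystem under test
HOSTNQN1=nqn.2016-06.io.spdk:host1       # two host identities to mask against
HOSTNQN2=nqn.2016-06.io.spdk:host2
HOSTID=$(uuidgen)                        # later passed to 'nvme connect -I'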
nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:29.819 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:29.819 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:29.819 Found net devices under 0000:08:00.0: cvl_0_0 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:29.819 Found net devices under 0000:08:00.1: cvl_0_1 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.819 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.820 10:18:19 
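[editor's note] The device scan traced above walks the supported Intel/Mellanox PCI IDs and keeps the kernel net device for each port whose link is up. A schematic sketch under the conditions of this run (two ice ports, 0x8086:0x159b); the pci_devs literal here is a stand-in for common.sh's pci_bus_cache lookup, and the operstate read stands in for its [[ up == up ]] checks:

pci_devs=(0000:08:00.0 0000:08:00.1)     # the two ports found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
        # keep only interfaces reporting link up
        [[ $(cat "$path/operstate" 2>/dev/null) == up ]] && net_devs+=("${path##*/}")
    done
done
echo "net_devs: ${net_devs[*]}"          # cvl_0_0 cvl_0_1 in this run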
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:11:29.820 00:11:29.820 --- 10.0.0.2 ping statistics --- 00:11:29.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.820 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:11:29.820 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:11:30.079 00:11:30.079 --- 10.0.0.1 ping statistics --- 00:11:30.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.079 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1478761 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1478761 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1478761 ']' 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.079 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.080 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
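[editor's note] The network layout the two pings above just verified was built a few lines earlier; condensed, with the port names from this run:

ip netns add cvl_0_0_ns_spdk                              # target lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                        # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns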
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.080 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.080 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:30.080 [2024-07-25 10:18:19.680635] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:11:30.080 [2024-07-25 10:18:19.680738] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.080 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.080 [2024-07-25 10:18:19.746459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.338 [2024-07-25 10:18:19.862037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.338 [2024-07-25 10:18:19.862093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.338 [2024-07-25 10:18:19.862109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.338 [2024-07-25 10:18:19.862122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.338 [2024-07-25 10:18:19.862134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.338 [2024-07-25 10:18:19.862172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.338 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.338 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:30.338 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:30.338 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:30.338 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:30.338 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.338 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:30.597 [2024-07-25 10:18:20.275180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.597 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:30.597 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:30.597 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:30.855 Malloc1 00:11:30.855 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:31.420 Malloc2 00:11:31.420 10:18:20 
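[editor's note] The launch-and-wait step traced above (nvmfappstart plus waitforlisten) amounts to starting nvmf_tgt inside the target namespace and polling its RPC socket. The readiness loop below is a sketch standing in for the waitforlisten helper, using the real rpc_get_methods RPC; $SPDK_DIR again abbreviates the workspace path:

ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# Block until the app answers on /var/tmp/spdk.sock before issuing config RPCs.
until $rpc_py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done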
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.678 10:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:31.935 10:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.193 [2024-07-25 10:18:21.791862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.193 10:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:32.193 10:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 694f391a-e133-40a4-8ae3-823e3973e0b9 -a 10.0.0.2 -s 4420 -i 4 00:11:32.193 10:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.193 10:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:32.193 10:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.193 10:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:32.193 10:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.720 10:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:34.720 [ 0]:0x1 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
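[editor's note] With the target up, the provisioning sequence traced above condenses to a handful of RPCs followed by a host-side connect. Flags are exactly as traced; -a on nvmf_create_subsystem allows any host, which is the default the masking test later tightens per-namespace:

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc1             # 64 MiB bdev, 512 B blocks
$rpc_py bdev_malloc_create 64 512 -b Malloc2
$rpc_py nvmf_create_subsystem "$SUBSYSNQN" -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns "$SUBSYSNQN" Malloc1 -n 1
$rpc_py nvmf_subsystem_add_listener "$SUBSYSNQN" -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n "$SUBSYSNQN" -q "$HOSTNQN1" -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4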
/dev/nvme0 -n 0x1 -o json 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dd3fbf4b8a44842a0ad1ad379417b68 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dd3fbf4b8a44842a0ad1ad379417b68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:34.720 [ 0]:0x1 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dd3fbf4b8a44842a0ad1ad379417b68 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dd3fbf4b8a44842a0ad1ad379417b68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:34.720 [ 1]:0x2 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da7f975a5082414bbe3f1ea93153cd35 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da7f975a5082414bbe3f1ea93153cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:34.720 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.978 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.236 10:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:35.525 10:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:35.525 10:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 694f391a-e133-40a4-8ae3-823e3973e0b9 -a 10.0.0.2 -s 4420 -i 4 00:11:35.783 10:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:35.783 10:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:35.783 10:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.783 10:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:35.783 10:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:35.783 10:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
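[editor's note] After each connect, the waitforserial helper traced above polls lsblk until the expected number of namespaces shows up as block devices carrying the subsystem serial. A sketch consistent with the trace:

waitforserial() {
    local serial=$1 want=${2:-1} i=0 got
    while (( i++ <= 15 )); do
        sleep 2
        got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( got == want )) && return 0    # found the expected device count
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME 1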
00:11:37.682 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:37.940 [ 0]:0x2 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da7f975a5082414bbe3f1ea93153cd35 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da7f975a5082414bbe3f1ea93153cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.940 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:38.198 [ 0]:0x1 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dd3fbf4b8a44842a0ad1ad379417b68 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dd3fbf4b8a44842a0ad1ad379417b68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:38.198 [ 1]:0x2 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da7f975a5082414bbe3f1ea93153cd35 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da7f975a5082414bbe3f1ea93153cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.198 10:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:38.456 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:38.456 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:38.456 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:38.456 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:38.456 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.457 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:38.457 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.457 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:38.457 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.457 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:38.457 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.457 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.714 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:38.715 [ 0]:0x2 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da7f975a5082414bbe3f1ea93153cd35 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da7f975a5082414bbe3f1ea93153cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.715 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:38.973 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:38.973 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 694f391a-e133-40a4-8ae3-823e3973e0b9 -a 10.0.0.2 -s 4420 -i 4 00:11:39.231 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:39.231 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:39.231 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.231 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:39.231 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:39.231 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:41.132 [ 0]:0x1 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dd3fbf4b8a44842a0ad1ad379417b68 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dd3fbf4b8a44842a0ad1ad379417b68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:41.132 [ 1]:0x2 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:41.132 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.389 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da7f975a5082414bbe3f1ea93153cd35 00:11:41.390 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da7f975a5082414bbe3f1ea93153cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.390 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:41.648 10:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:41.648 [ 0]:0x2 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da7f975a5082414bbe3f1ea93153cd35 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da7f975a5082414bbe3f1ea93153cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:41.648 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:41.906 [2024-07-25 10:18:31.601707] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:41.906 request: 00:11:41.906 { 00:11:41.906 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.906 "nsid": 2, 00:11:41.906 "host": "nqn.2016-06.io.spdk:host1", 00:11:41.906 "method": "nvmf_ns_remove_host", 00:11:41.906 "req_id": 1 00:11:41.906 } 00:11:41.906 Got JSON-RPC error response 00:11:41.906 response: 00:11:41.906 { 00:11:41.906 "code": -32602, 00:11:41.906 "message": "Invalid parameters" 00:11:41.906 } 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:41.906 [ 0]:0x2 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:41.906 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.164 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da7f975a5082414bbe3f1ea93153cd35 00:11:42.164 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da7f975a5082414bbe3f1ea93153cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1480031 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1480031 /var/tmp/host.sock 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1480031 ']' 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:42.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.165 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:42.165 [2024-07-25 10:18:31.816718] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
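From this point the test drives two SPDK processes at once: the nvmf target on the default RPC socket, and the second spdk_tgt started just above with -r /var/tmp/host.sock -m 2, which plays the NVMe-oF host role. The hostrpc calls in the following trace are the script's shorthand for routing RPCs to that second instance; per the indirection visible at ns_masking.sh@48 it is essentially:

    # route an RPC to the host-side SPDK instance instead of the target
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    # e.g. attach the host-side bdev_nvme driver to the target subsystem:
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

Using a second SPDK app as the host (instead of the kernel initiator used earlier) lets the test verify the per-host namespace mapping by listing the resulting bdevs (nvme0n1, nvme1n2) and matching their uuids against the NGUIDs assigned via nvmf_subsystem_add_ns -g.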
00:11:42.165 [2024-07-25 10:18:31.816819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480031 ] 00:11:42.165 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.165 [2024-07-25 10:18:31.877837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.422 [2024-07-25 10:18:31.995635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.679 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:42.679 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:42.679 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.936 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:43.194 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 80607e30-346b-43da-8267-59c803ede770 00:11:43.194 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:43.194 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 80607E30346B43DA826759C803EDE770 -i 00:11:43.451 10:18:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e67a8a01-d230-4cb7-9d5c-bfca25956c35 00:11:43.451 10:18:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:43.451 10:18:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E67A8A01D2304CB79D5CBFCA25956C35 -i 00:11:43.709 10:18:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.966 10:18:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:44.530 10:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:44.530 10:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:44.788 nvme0n1 00:11:44.788 10:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:44.788 10:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:45.354 nvme1n2 00:11:45.354 10:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:45.354 10:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:45.354 10:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:45.354 10:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:45.354 10:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:45.612 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:45.612 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:45.612 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:45.612 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:45.869 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 80607e30-346b-43da-8267-59c803ede770 == \8\0\6\0\7\e\3\0\-\3\4\6\b\-\4\3\d\a\-\8\2\6\7\-\5\9\c\8\0\3\e\d\e\7\7\0 ]] 00:11:45.869 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:45.869 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:45.869 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ e67a8a01-d230-4cb7-9d5c-bfca25956c35 == \e\6\7\a\8\a\0\1\-\d\2\3\0\-\4\c\b\7\-\9\d\5\c\-\b\f\c\a\2\5\9\5\6\c\3\5 ]] 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1480031 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1480031 ']' 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1480031 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1480031 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1480031' 00:11:46.143 killing process with pid 1480031 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1480031 00:11:46.143 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1480031 00:11:46.413 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.671 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:46.671 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:46.671 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:46.671 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:46.671 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:46.671 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:46.671 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.671 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:46.671 rmmod nvme_tcp 00:11:46.671 rmmod nvme_fabrics 00:11:46.930 rmmod nvme_keyring 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1478761 ']' 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1478761 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1478761 ']' 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1478761 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1478761 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1478761' 00:11:46.930 killing process with pid 1478761 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1478761 00:11:46.930 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1478761 00:11:47.190 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.190 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:47.190 
10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:47.190 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.190 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.190 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.190 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.190 10:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.098 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:49.098 00:11:49.098 real 0m21.047s 00:11:49.098 user 0m28.886s 00:11:49.098 sys 0m3.817s 00:11:49.098 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.098 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:49.098 ************************************ 00:11:49.098 END TEST nvmf_ns_masking 00:11:49.098 ************************************ 00:11:49.098 10:18:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:11:49.098 10:18:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.098 10:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.098 10:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.098 10:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.098 ************************************ 00:11:49.098 START TEST nvmf_nvme_cli 00:11:49.098 ************************************ 00:11:49.098 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.356 * Looking for test storage... 
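The nvmf_nvme_cli test starting here runs the same nvmftestinit bring-up: nvmf/common.sh enumerates supported NICs by PCI ID (the e810/x722/mlx tables built below) and then maps each matching PCI address to its kernel net device. The discovery loop traced below at nvmf/common.sh@382-401 boils down to this sketch (variable names follow the trace; the real loop also filters on link state and driver):

    # map each supported NVMe-oF-capable PCI device to its net interface
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs entries name the netdevs
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the interface name only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

In this run the two ice-driven E810 ports at 0000:08:00.0/.1 resolve to cvl_0_0 and cvl_0_1, which become the target and initiator interfaces respectively.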
00:11:49.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.356 10:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.356 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.261 10:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:51.261 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:51.261 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.261 10:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:51.261 Found net devices under 0000:08:00.0: cvl_0_0 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:51.261 Found net devices under 0000:08:00.1: cvl_0_1 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.261 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.262 10:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:51.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:11:51.262 00:11:51.262 --- 10.0.0.2 ping statistics --- 00:11:51.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.262 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:11:51.262 00:11:51.262 --- 10.0.0.1 ping statistics --- 00:11:51.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.262 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1481978 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1481978 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1481978 ']' 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.262 10:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.262 [2024-07-25 10:18:40.731580] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
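The trace above captures nvmf_tcp_init building a point-to-point test link out of the two detected ports: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP port 4420, and both directions are verified with a single ping. A minimal standalone sketch of the same plumbing, with the interface names, addresses and namespace name taken from the log (anything beyond that is an assumption):

    # Hedged reconstruction of the nvmf_tcp_init steps traced above
    # (nvmf/common.sh@244-268); error handling omitted.
    TARGET_IF=cvl_0_0          # moved into the namespace, becomes the target port
    INITIATOR_IF=cvl_0_1       # stays in the root namespace as the initiator port
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # root ns -> target side
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target ns -> initiator side

Because the target port now lives inside cvl_0_0_ns_spdk, every target-side command, including the nvmf_tgt launch that follows, is prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array seen in the trace).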
00:11:51.262 [2024-07-25 10:18:40.731681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.262 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.262 [2024-07-25 10:18:40.796285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.262 [2024-07-25 10:18:40.914754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.262 [2024-07-25 10:18:40.914818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.262 [2024-07-25 10:18:40.914834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.262 [2024-07-25 10:18:40.914847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.262 [2024-07-25 10:18:40.914858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.262 [2024-07-25 10:18:40.914964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.262 [2024-07-25 10:18:40.915044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.262 [2024-07-25 10:18:40.915093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.262 [2024-07-25 10:18:40.915096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.262 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.262 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:11:51.262 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.262 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.262 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.520 [2024-07-25 10:18:41.053732] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.520 Malloc0 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:51.520 10:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.520 Malloc1 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.520 [2024-07-25 10:18:41.130338] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.520 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:11:51.520 00:11:51.520 Discovery Log Number of Records 2, Generation counter 2 00:11:51.520 =====Discovery Log Entry 0====== 00:11:51.520 trtype: tcp 00:11:51.521 adrfam: ipv4 00:11:51.521 subtype: current discovery subsystem 00:11:51.521 treq: not required 
00:11:51.521 portid: 0 00:11:51.521 trsvcid: 4420 00:11:51.521 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:51.521 traddr: 10.0.0.2 00:11:51.521 eflags: explicit discovery connections, duplicate discovery information 00:11:51.521 sectype: none 00:11:51.521 =====Discovery Log Entry 1====== 00:11:51.521 trtype: tcp 00:11:51.521 adrfam: ipv4 00:11:51.521 subtype: nvme subsystem 00:11:51.521 treq: not required 00:11:51.521 portid: 0 00:11:51.521 trsvcid: 4420 00:11:51.521 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:51.521 traddr: 10.0.0.2 00:11:51.521 eflags: none 00:11:51.521 sectype: none 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:51.521 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.085 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:52.085 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.085 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.085 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:52.085 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:52.085 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:53.981 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:54.238 /dev/nvme0n1 ]] 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.238 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:54.511 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.770 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:54.770 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.770 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:54.770 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:54.770 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.770 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:54.770 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.770 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:54.770 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.771 rmmod nvme_tcp 00:11:54.771 rmmod nvme_fabrics 00:11:54.771 rmmod nvme_keyring 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1481978 ']' 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1481978 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1481978 ']' 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1481978 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1481978 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1481978' 00:11:54.771 killing process with pid 1481978 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1481978 00:11:54.771 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1481978 00:11:55.031 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.031 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.031 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.031 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.031 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.031 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.031 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.031 10:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.942 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:56.942 00:11:56.942 real 0m7.882s 00:11:56.942 user 0m15.174s 00:11:56.942 sys 0m1.965s 00:11:56.942 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.942 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.942 ************************************ 00:11:56.942 END TEST nvmf_nvme_cli 00:11:56.942 ************************************ 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.201 ************************************ 00:11:57.201 START TEST nvmf_vfio_user 00:11:57.201 ************************************ 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:57.201 * Looking for test storage... 
00:11:57.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:57.201 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:57.202 10:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1482705 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1482705' 00:11:57.202 Process pid: 1482705 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1482705 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1482705 ']' 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.202 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:57.202 [2024-07-25 10:18:46.898258] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:11:57.202 [2024-07-25 10:18:46.898359] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.202 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.202 [2024-07-25 10:18:46.964090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.460 [2024-07-25 10:18:47.087044] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.460 [2024-07-25 10:18:47.087112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:57.460 [2024-07-25 10:18:47.087127] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.460 [2024-07-25 10:18:47.087141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.460 [2024-07-25 10:18:47.087152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.460 [2024-07-25 10:18:47.087214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.460 [2024-07-25 10:18:47.087679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.460 [2024-07-25 10:18:47.087756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.460 [2024-07-25 10:18:47.091498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.460 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.460 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:11:57.460 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:58.832 10:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:58.832 10:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:58.832 10:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:58.832 10:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:58.832 10:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:58.832 10:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:59.090 Malloc1 00:11:59.090 10:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:59.655 10:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:59.913 10:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:00.171 10:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:00.171 10:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:00.171 10:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:00.428 Malloc2 00:12:00.428 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
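The RPC sequence here repeats one recipe per device: a vfio-user socket directory, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace and a listener (the loop's second iteration completes just below). Condensed into a sketch that mirrors the script's own for i in $(seq 1 $NUM_DEVICES) loop, with every rpc.py invocation and argument copied from the trace:

    # Hedged summary of the vfio-user target bring-up loop traced above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i                  # 64 MiB, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done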
00:12:00.686 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:00.944 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:01.202 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:01.202 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:01.202 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:01.202 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:01.202 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:01.202 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:01.202 [2024-07-25 10:18:50.957290] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:12:01.202 [2024-07-25 10:18:50.957342] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483039 ] 00:12:01.202 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.462 [2024-07-25 10:18:50.999537] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:01.462 [2024-07-25 10:18:51.002637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:01.462 [2024-07-25 10:18:51.002667] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9ab2593000 00:12:01.462 [2024-07-25 10:18:51.003631] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.462 [2024-07-25 10:18:51.004628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.462 [2024-07-25 10:18:51.005629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.462 [2024-07-25 10:18:51.006633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.462 [2024-07-25 10:18:51.007640] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.462 [2024-07-25 10:18:51.008649] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.462 [2024-07-25 10:18:51.009648] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.462 [2024-07-25 10:18:51.010654] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.463 [2024-07-25 10:18:51.011664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:01.463 [2024-07-25 10:18:51.011686] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9ab2588000 00:12:01.463 [2024-07-25 10:18:51.013146] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:01.463 [2024-07-25 10:18:51.033398] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:01.463 [2024-07-25 10:18:51.033451] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:01.463 [2024-07-25 10:18:51.038814] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:01.463 [2024-07-25 10:18:51.038877] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:01.463 [2024-07-25 10:18:51.038982] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:01.463 [2024-07-25 10:18:51.039011] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:01.463 [2024-07-25 10:18:51.039023] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:01.463 [2024-07-25 10:18:51.039801] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:01.463 [2024-07-25 10:18:51.039827] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:01.463 [2024-07-25 10:18:51.039842] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:01.463 [2024-07-25 10:18:51.040804] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:01.463 [2024-07-25 10:18:51.040823] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:01.463 [2024-07-25 10:18:51.040838] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:01.463 [2024-07-25 10:18:51.041810] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:01.463 [2024-07-25 10:18:51.041830] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:01.463 [2024-07-25 10:18:51.042817] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:01.463 [2024-07-25 10:18:51.042838] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:01.463 [2024-07-25 10:18:51.042848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:01.463 [2024-07-25 10:18:51.042861] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:01.463 [2024-07-25 10:18:51.042977] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:01.463 [2024-07-25 10:18:51.042987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:01.463 [2024-07-25 10:18:51.042996] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:01.463 [2024-07-25 10:18:51.043826] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:01.463 [2024-07-25 10:18:51.044826] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:01.463 [2024-07-25 10:18:51.045835] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:01.463 [2024-07-25 10:18:51.046828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.463 [2024-07-25 10:18:51.046968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:01.463 [2024-07-25 10:18:51.047849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:01.463 [2024-07-25 10:18:51.047869] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:01.463 [2024-07-25 10:18:51.047879] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.047907] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:01.463 [2024-07-25 10:18:51.047928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.047954] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.463 [2024-07-25 10:18:51.047964] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.463 [2024-07-25 10:18:51.047972] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.463 [2024-07-25 10:18:51.047991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.463 [2024-07-25 10:18:51.048067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:01.463 [2024-07-25 10:18:51.048084] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:01.463 [2024-07-25 10:18:51.048094] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:01.463 [2024-07-25 10:18:51.048103] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:01.463 [2024-07-25 10:18:51.048112] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:01.463 [2024-07-25 10:18:51.048122] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:01.463 [2024-07-25 10:18:51.048131] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:01.463 [2024-07-25 10:18:51.048140] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:01.463 [2024-07-25 10:18:51.048201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:01.463 [2024-07-25 10:18:51.048223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.463 [2024-07-25 10:18:51.048239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.463 [2024-07-25 10:18:51.048252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.463 [2024-07-25 10:18:51.048267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.463 [2024-07-25 10:18:51.048277] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048294] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:01.463 [2024-07-25 10:18:51.048325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:01.463 [2024-07-25 10:18:51.048336] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:01.463 
[2024-07-25 10:18:51.048351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048369] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:01.463 [2024-07-25 10:18:51.048410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:01.463 [2024-07-25 10:18:51.048493] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048514] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048529] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:01.463 [2024-07-25 10:18:51.048539] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:01.463 [2024-07-25 10:18:51.048546] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.463 [2024-07-25 10:18:51.048556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:01.463 [2024-07-25 10:18:51.048575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:01.463 [2024-07-25 10:18:51.048593] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:01.463 [2024-07-25 10:18:51.048614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:01.463 [2024-07-25 10:18:51.048644] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.463 [2024-07-25 10:18:51.048654] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.463 [2024-07-25 10:18:51.048660] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.463 [2024-07-25 10:18:51.048671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.463 [2024-07-25 10:18:51.048701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:01.464 [2024-07-25 10:18:51.048733] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:12:01.464 [2024-07-25 10:18:51.048749] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:01.464 [2024-07-25 10:18:51.048763] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.464 [2024-07-25 10:18:51.048772] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.464 [2024-07-25 10:18:51.048779] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.464 [2024-07-25 10:18:51.048790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.464 [2024-07-25 10:18:51.048814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:01.464 [2024-07-25 10:18:51.048829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:01.464 [2024-07-25 10:18:51.048841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:01.464 [2024-07-25 10:18:51.048864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:01.464 [2024-07-25 10:18:51.048878] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:01.464 [2024-07-25 10:18:51.048888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:01.464 [2024-07-25 10:18:51.048898] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:01.464 [2024-07-25 10:18:51.048907] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:01.464 [2024-07-25 10:18:51.048916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:01.464 [2024-07-25 10:18:51.048926] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:01.464 [2024-07-25 10:18:51.048954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:01.464 [2024-07-25 10:18:51.048974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:01.464 [2024-07-25 10:18:51.048996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:01.464 [2024-07-25 10:18:51.049013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:01.464 [2024-07-25 10:18:51.049033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:01.464 [2024-07-25 
10:18:51.049046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:01.464 [2024-07-25 10:18:51.049064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:01.464 [2024-07-25 10:18:51.049077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:01.464 [2024-07-25 10:18:51.049101] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:01.464 [2024-07-25 10:18:51.049112] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:01.464 [2024-07-25 10:18:51.049119] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:01.464 [2024-07-25 10:18:51.049126] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:01.464 [2024-07-25 10:18:51.049133] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:01.464 [2024-07-25 10:18:51.049144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:01.464 [2024-07-25 10:18:51.049157] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:01.464 [2024-07-25 10:18:51.049166] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:01.464 [2024-07-25 10:18:51.049173] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.464 [2024-07-25 10:18:51.049183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:01.464 [2024-07-25 10:18:51.049196] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:01.464 [2024-07-25 10:18:51.049205] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.464 [2024-07-25 10:18:51.049211] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.464 [2024-07-25 10:18:51.049221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.464 [2024-07-25 10:18:51.049235] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:01.464 [2024-07-25 10:18:51.049244] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:01.464 [2024-07-25 10:18:51.049251] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.464 [2024-07-25 10:18:51.049261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:01.464 [2024-07-25 10:18:51.049274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:01.464 [2024-07-25 10:18:51.049295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:01.464 [2024-07-25 
10:18:51.049317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:01.464 [2024-07-25 10:18:51.049331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:01.464 ===================================================== 00:12:01.464 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:01.464 ===================================================== 00:12:01.464 Controller Capabilities/Features 00:12:01.464 ================================ 00:12:01.464 Vendor ID: 4e58 00:12:01.464 Subsystem Vendor ID: 4e58 00:12:01.464 Serial Number: SPDK1 00:12:01.464 Model Number: SPDK bdev Controller 00:12:01.464 Firmware Version: 24.09 00:12:01.464 Recommended Arb Burst: 6 00:12:01.464 IEEE OUI Identifier: 8d 6b 50 00:12:01.464 Multi-path I/O 00:12:01.464 May have multiple subsystem ports: Yes 00:12:01.464 May have multiple controllers: Yes 00:12:01.464 Associated with SR-IOV VF: No 00:12:01.464 Max Data Transfer Size: 131072 00:12:01.464 Max Number of Namespaces: 32 00:12:01.464 Max Number of I/O Queues: 127 00:12:01.464 NVMe Specification Version (VS): 1.3 00:12:01.464 NVMe Specification Version (Identify): 1.3 00:12:01.464 Maximum Queue Entries: 256 00:12:01.464 Contiguous Queues Required: Yes 00:12:01.464 Arbitration Mechanisms Supported 00:12:01.464 Weighted Round Robin: Not Supported 00:12:01.464 Vendor Specific: Not Supported 00:12:01.464 Reset Timeout: 15000 ms 00:12:01.464 Doorbell Stride: 4 bytes 00:12:01.464 NVM Subsystem Reset: Not Supported 00:12:01.464 Command Sets Supported 00:12:01.464 NVM Command Set: Supported 00:12:01.464 Boot Partition: Not Supported 00:12:01.464 Memory Page Size Minimum: 4096 bytes 00:12:01.464 Memory Page Size Maximum: 4096 bytes 00:12:01.464 Persistent Memory Region: Not Supported 00:12:01.464 Optional Asynchronous Events Supported 00:12:01.464 Namespace Attribute Notices: Supported 00:12:01.464 Firmware Activation Notices: Not Supported 00:12:01.464 ANA Change Notices: Not Supported 00:12:01.464 PLE Aggregate Log Change Notices: Not Supported 00:12:01.464 LBA Status Info Alert Notices: Not Supported 00:12:01.464 EGE Aggregate Log Change Notices: Not Supported 00:12:01.464 Normal NVM Subsystem Shutdown event: Not Supported 00:12:01.464 Zone Descriptor Change Notices: Not Supported 00:12:01.464 Discovery Log Change Notices: Not Supported 00:12:01.464 Controller Attributes 00:12:01.464 128-bit Host Identifier: Supported 00:12:01.464 Non-Operational Permissive Mode: Not Supported 00:12:01.464 NVM Sets: Not Supported 00:12:01.464 Read Recovery Levels: Not Supported 00:12:01.464 Endurance Groups: Not Supported 00:12:01.464 Predictable Latency Mode: Not Supported 00:12:01.464 Traffic Based Keep ALive: Not Supported 00:12:01.464 Namespace Granularity: Not Supported 00:12:01.464 SQ Associations: Not Supported 00:12:01.464 UUID List: Not Supported 00:12:01.464 Multi-Domain Subsystem: Not Supported 00:12:01.464 Fixed Capacity Management: Not Supported 00:12:01.464 Variable Capacity Management: Not Supported 00:12:01.464 Delete Endurance Group: Not Supported 00:12:01.464 Delete NVM Set: Not Supported 00:12:01.464 Extended LBA Formats Supported: Not Supported 00:12:01.464 Flexible Data Placement Supported: Not Supported 00:12:01.464 00:12:01.464 Controller Memory Buffer Support 00:12:01.464 ================================ 00:12:01.464 Supported: No 00:12:01.464 00:12:01.464 Persistent 
Memory Region Support 00:12:01.464 ================================ 00:12:01.464 Supported: No 00:12:01.464 00:12:01.464 Admin Command Set Attributes 00:12:01.464 ============================ 00:12:01.464 Security Send/Receive: Not Supported 00:12:01.464 Format NVM: Not Supported 00:12:01.464 Firmware Activate/Download: Not Supported 00:12:01.464 Namespace Management: Not Supported 00:12:01.464 Device Self-Test: Not Supported 00:12:01.464 Directives: Not Supported 00:12:01.464 NVMe-MI: Not Supported 00:12:01.464 Virtualization Management: Not Supported 00:12:01.464 Doorbell Buffer Config: Not Supported 00:12:01.464 Get LBA Status Capability: Not Supported 00:12:01.464 Command & Feature Lockdown Capability: Not Supported 00:12:01.465 Abort Command Limit: 4 00:12:01.465 Async Event Request Limit: 4 00:12:01.465 Number of Firmware Slots: N/A 00:12:01.465 Firmware Slot 1 Read-Only: N/A 00:12:01.465 Firmware Activation Without Reset: N/A 00:12:01.465 Multiple Update Detection Support: N/A 00:12:01.465 Firmware Update Granularity: No Information Provided 00:12:01.465 Per-Namespace SMART Log: No 00:12:01.465 Asymmetric Namespace Access Log Page: Not Supported 00:12:01.465 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:01.465 Command Effects Log Page: Supported 00:12:01.465 Get Log Page Extended Data: Supported 00:12:01.465 Telemetry Log Pages: Not Supported 00:12:01.465 Persistent Event Log Pages: Not Supported 00:12:01.465 Supported Log Pages Log Page: May Support 00:12:01.465 Commands Supported & Effects Log Page: Not Supported 00:12:01.465 Feature Identifiers & Effects Log Page:May Support 00:12:01.465 NVMe-MI Commands & Effects Log Page: May Support 00:12:01.465 Data Area 4 for Telemetry Log: Not Supported 00:12:01.465 Error Log Page Entries Supported: 128 00:12:01.465 Keep Alive: Supported 00:12:01.465 Keep Alive Granularity: 10000 ms 00:12:01.465 00:12:01.465 NVM Command Set Attributes 00:12:01.465 ========================== 00:12:01.465 Submission Queue Entry Size 00:12:01.465 Max: 64 00:12:01.465 Min: 64 00:12:01.465 Completion Queue Entry Size 00:12:01.465 Max: 16 00:12:01.465 Min: 16 00:12:01.465 Number of Namespaces: 32 00:12:01.465 Compare Command: Supported 00:12:01.465 Write Uncorrectable Command: Not Supported 00:12:01.465 Dataset Management Command: Supported 00:12:01.465 Write Zeroes Command: Supported 00:12:01.465 Set Features Save Field: Not Supported 00:12:01.465 Reservations: Not Supported 00:12:01.465 Timestamp: Not Supported 00:12:01.465 Copy: Supported 00:12:01.465 Volatile Write Cache: Present 00:12:01.465 Atomic Write Unit (Normal): 1 00:12:01.465 Atomic Write Unit (PFail): 1 00:12:01.465 Atomic Compare & Write Unit: 1 00:12:01.465 Fused Compare & Write: Supported 00:12:01.465 Scatter-Gather List 00:12:01.465 SGL Command Set: Supported (Dword aligned) 00:12:01.465 SGL Keyed: Not Supported 00:12:01.465 SGL Bit Bucket Descriptor: Not Supported 00:12:01.465 SGL Metadata Pointer: Not Supported 00:12:01.465 Oversized SGL: Not Supported 00:12:01.465 SGL Metadata Address: Not Supported 00:12:01.465 SGL Offset: Not Supported 00:12:01.465 Transport SGL Data Block: Not Supported 00:12:01.465 Replay Protected Memory Block: Not Supported 00:12:01.465 00:12:01.465 Firmware Slot Information 00:12:01.465 ========================= 00:12:01.465 Active slot: 1 00:12:01.465 Slot 1 Firmware Revision: 24.09 00:12:01.465 00:12:01.465 00:12:01.465 Commands Supported and Effects 00:12:01.465 ============================== 00:12:01.465 Admin Commands 00:12:01.465 -------------- 00:12:01.465 Get 
Log Page (02h): Supported 00:12:01.465 Identify (06h): Supported 00:12:01.465 Abort (08h): Supported 00:12:01.465 Set Features (09h): Supported 00:12:01.465 Get Features (0Ah): Supported 00:12:01.465 Asynchronous Event Request (0Ch): Supported 00:12:01.465 Keep Alive (18h): Supported 00:12:01.465 I/O Commands 00:12:01.465 ------------ 00:12:01.465 Flush (00h): Supported LBA-Change 00:12:01.465 Write (01h): Supported LBA-Change 00:12:01.465 Read (02h): Supported 00:12:01.465 Compare (05h): Supported 00:12:01.465 Write Zeroes (08h): Supported LBA-Change 00:12:01.465 Dataset Management (09h): Supported LBA-Change 00:12:01.465 Copy (19h): Supported LBA-Change 00:12:01.465 00:12:01.465 Error Log 00:12:01.465 ========= 00:12:01.465 00:12:01.465 Arbitration 00:12:01.465 =========== 00:12:01.465 Arbitration Burst: 1 00:12:01.465 00:12:01.465 Power Management 00:12:01.465 ================ 00:12:01.465 Number of Power States: 1 00:12:01.465 Current Power State: Power State #0 00:12:01.465 Power State #0: 00:12:01.465 Max Power: 0.00 W 00:12:01.465 Non-Operational State: Operational 00:12:01.465 Entry Latency: Not Reported 00:12:01.465 Exit Latency: Not Reported 00:12:01.465 Relative Read Throughput: 0 00:12:01.465 Relative Read Latency: 0 00:12:01.465 Relative Write Throughput: 0 00:12:01.465 Relative Write Latency: 0 00:12:01.465 Idle Power: Not Reported 00:12:01.465 Active Power: Not Reported 00:12:01.465 Non-Operational Permissive Mode: Not Supported 00:12:01.465 00:12:01.465 Health Information 00:12:01.465 ================== 00:12:01.465 Critical Warnings: 00:12:01.465 Available Spare Space: OK 00:12:01.465 Temperature: OK 00:12:01.465 Device Reliability: OK 00:12:01.465 Read Only: No 00:12:01.465 Volatile Memory Backup: OK 00:12:01.465 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:01.465 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:01.465 Available Spare: 0% 00:12:01.465 [2024-07-25 10:18:51.049474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:01.465 [2024-07-25 10:18:51.049508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:01.465 [2024-07-25 10:18:51.049574] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:01.465 [2024-07-25 10:18:51.049594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.465 [2024-07-25 10:18:51.049608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.465 [2024-07-25 10:18:51.049621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.465 [2024-07-25 10:18:51.049632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.465 [2024-07-25 10:18:51.053493] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:01.465 [2024-07-25 10:18:51.053518] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:01.465 [2024-07-25 10:18:51.053879] vfio_user.c:2798:disable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.465 [2024-07-25 10:18:51.053960] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:01.465 [2024-07-25 10:18:51.053975] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:01.465 [2024-07-25 10:18:51.054892] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:01.465 [2024-07-25 10:18:51.054917] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:01.465 [2024-07-25 10:18:51.054992] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:01.465 [2024-07-25 10:18:51.056932] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:01.465 Available Spare Threshold: 0% 00:12:01.465 Life Percentage Used: 0% 00:12:01.465 Data Units Read: 0 00:12:01.465 Data Units Written: 0 00:12:01.465 Host Read Commands: 0 00:12:01.465 Host Write Commands: 0 00:12:01.465 Controller Busy Time: 0 minutes 00:12:01.465 Power Cycles: 0 00:12:01.465 Power On Hours: 0 hours 00:12:01.465 Unsafe Shutdowns: 0 00:12:01.465 Unrecoverable Media Errors: 0 00:12:01.465 Lifetime Error Log Entries: 0 00:12:01.465 Warning Temperature Time: 0 minutes 00:12:01.465 Critical Temperature Time: 0 minutes 00:12:01.465 00:12:01.465 Number of Queues 00:12:01.465 ================ 00:12:01.465 Number of I/O Submission Queues: 127 00:12:01.465 Number of I/O Completion Queues: 127 00:12:01.465 00:12:01.465 Active Namespaces 00:12:01.465 ================= 00:12:01.465 Namespace ID:1 00:12:01.465 Error Recovery Timeout: Unlimited 00:12:01.465 Command Set Identifier: NVM (00h) 00:12:01.465 Deallocate: Supported 00:12:01.465 Deallocated/Unwritten Error: Not Supported 00:12:01.465 Deallocated Read Value: Unknown 00:12:01.465 Deallocate in Write Zeroes: Not Supported 00:12:01.465 Deallocated Guard Field: 0xFFFF 00:12:01.465 Flush: Supported 00:12:01.465 Reservation: Supported 00:12:01.465 Namespace Sharing Capabilities: Multiple Controllers 00:12:01.465 Size (in LBAs): 131072 (0GiB) 00:12:01.465 Capacity (in LBAs): 131072 (0GiB) 00:12:01.465 Utilization (in LBAs): 131072 (0GiB) 00:12:01.465 NGUID: D6CC3CDEFABA477FAA975D235D8EF32D 00:12:01.465 UUID: d6cc3cde-faba-477f-aa97-5d235d8ef32d 00:12:01.465 Thin Provisioning: Not Supported 00:12:01.465 Per-NS Atomic Units: Yes 00:12:01.465 Atomic Boundary Size (Normal): 0 00:12:01.465 Atomic Boundary Size (PFail): 0 00:12:01.465 Atomic Boundary Offset: 0 00:12:01.465 Maximum Single Source Range Length: 65535 00:12:01.465 Maximum Copy Length: 65535 00:12:01.465 Maximum Source Range Count: 1 00:12:01.465 NGUID/EUI64 Never Reused: No 00:12:01.465 Namespace Write Protected: No 00:12:01.465 Number of LBA Formats: 1 00:12:01.466 Current LBA Format: LBA Format #00 00:12:01.466 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:01.466 00:12:01.466 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:01.466 EAL: No free 2048 kB hugepages reported
on node 1 00:12:01.724 [2024-07-25 10:18:51.298601] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:06.985 Initializing NVMe Controllers 00:12:06.985 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:06.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:06.985 Initialization complete. Launching workers. 00:12:06.985 ======================================================== 00:12:06.985 Latency(us) 00:12:06.985 Device Information : IOPS MiB/s Average min max 00:12:06.985 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24064.77 94.00 5318.93 1486.04 10522.29 00:12:06.985 ======================================================== 00:12:06.985 Total : 24064.77 94.00 5318.93 1486.04 10522.29 00:12:06.985 00:12:06.985 [2024-07-25 10:18:56.323494] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:06.985 10:18:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:06.985 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.986 [2024-07-25 10:18:56.555693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:12.249 Initializing NVMe Controllers 00:12:12.249 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:12.249 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:12.249 Initialization complete. Launching workers. 
00:12:12.249 ======================================================== 00:12:12.249 Latency(us) 00:12:12.249 Device Information : IOPS MiB/s Average min max 00:12:12.249 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16007.33 62.53 7995.54 6985.30 15981.75 00:12:12.249 ======================================================== 00:12:12.250 Total : 16007.33 62.53 7995.54 6985.30 15981.75 00:12:12.250 00:12:12.250 [2024-07-25 10:19:01.589333] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:12.250 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:12.250 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.250 [2024-07-25 10:19:01.825529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:17.515 [2024-07-25 10:19:06.889761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:17.515 Initializing NVMe Controllers 00:12:17.515 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:17.515 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:17.515 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:17.515 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:17.515 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:17.515 Initialization complete. Launching workers. 00:12:17.515 Starting thread on core 2 00:12:17.515 Starting thread on core 3 00:12:17.515 Starting thread on core 1 00:12:17.515 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:17.515 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.515 [2024-07-25 10:19:07.185950] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:20.799 [2024-07-25 10:19:10.240705] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:20.799 Initializing NVMe Controllers 00:12:20.799 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.799 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.799 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:20.799 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:20.799 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:20.799 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:20.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:20.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:20.799 Initialization complete. Launching workers. 
00:12:20.799 Starting thread on core 1 with urgent priority queue 00:12:20.799 Starting thread on core 2 with urgent priority queue 00:12:20.799 Starting thread on core 3 with urgent priority queue 00:12:20.799 Starting thread on core 0 with urgent priority queue 00:12:20.799 SPDK bdev Controller (SPDK1 ) core 0: 4022.33 IO/s 24.86 secs/100000 ios 00:12:20.799 SPDK bdev Controller (SPDK1 ) core 1: 4054.33 IO/s 24.66 secs/100000 ios 00:12:20.799 SPDK bdev Controller (SPDK1 ) core 2: 3442.00 IO/s 29.05 secs/100000 ios 00:12:20.799 SPDK bdev Controller (SPDK1 ) core 3: 3625.67 IO/s 27.58 secs/100000 ios 00:12:20.799 ======================================================== 00:12:20.799 00:12:20.799 10:19:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:20.799 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.799 [2024-07-25 10:19:10.529006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:20.799 Initializing NVMe Controllers 00:12:20.799 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.799 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.799 Namespace ID: 1 size: 0GB 00:12:20.799 Initialization complete. 00:12:20.799 INFO: using host memory buffer for IO 00:12:20.799 Hello world! 00:12:20.799 [2024-07-25 10:19:10.563712] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:21.057 10:19:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:21.057 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.315 [2024-07-25 10:19:10.842967] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:22.248 Initializing NVMe Controllers 00:12:22.248 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.248 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.248 Initialization complete. Launching workers. 
00:12:22.248 submit (in ns) avg, min, max = 8173.3, 4494.8, 4019306.7 00:12:22.248 complete (in ns) avg, min, max = 30146.7, 2641.5, 4018490.4 00:12:22.248 00:12:22.248 Submit histogram 00:12:22.248 ================ 00:12:22.248 Range in us Cumulative Count 00:12:22.248 4.480 - 4.504: 0.0600% ( 7) 00:12:22.248 4.504 - 4.527: 0.7199% ( 77) 00:12:22.248 4.527 - 4.551: 2.6823% ( 229) 00:12:22.248 4.551 - 4.575: 6.8386% ( 485) 00:12:22.248 4.575 - 4.599: 12.8203% ( 698) 00:12:22.248 4.599 - 4.622: 17.2337% ( 515) 00:12:22.248 4.622 - 4.646: 19.6504% ( 282) 00:12:22.248 4.646 - 4.670: 20.6273% ( 114) 00:12:22.248 4.670 - 4.693: 21.2529% ( 73) 00:12:22.248 4.693 - 4.717: 22.4869% ( 144) 00:12:22.248 4.717 - 4.741: 25.1950% ( 316) 00:12:22.248 4.741 - 4.764: 31.7594% ( 766) 00:12:22.248 4.764 - 4.788: 38.7951% ( 821) 00:12:22.248 4.788 - 4.812: 45.1967% ( 747) 00:12:22.248 4.812 - 4.836: 47.6476% ( 286) 00:12:22.248 4.836 - 4.859: 48.2046% ( 65) 00:12:22.248 4.859 - 4.883: 48.8045% ( 70) 00:12:22.248 4.883 - 4.907: 49.7386% ( 109) 00:12:22.248 4.907 - 4.930: 50.7841% ( 122) 00:12:22.248 4.930 - 4.954: 52.6095% ( 213) 00:12:22.248 4.954 - 4.978: 54.2120% ( 187) 00:12:22.248 4.978 - 5.001: 55.7717% ( 182) 00:12:22.248 5.001 - 5.025: 56.7572% ( 115) 00:12:22.248 5.025 - 5.049: 57.4685% ( 83) 00:12:22.248 5.049 - 5.073: 57.7856% ( 37) 00:12:22.248 5.073 - 5.096: 57.9313% ( 17) 00:12:22.248 5.096 - 5.120: 58.0512% ( 14) 00:12:22.248 5.120 - 5.144: 58.4883% ( 51) 00:12:22.248 5.144 - 5.167: 60.1508% ( 194) 00:12:22.248 5.167 - 5.191: 63.9301% ( 441) 00:12:22.248 5.191 - 5.215: 67.8464% ( 457) 00:12:22.248 5.215 - 5.239: 70.4173% ( 300) 00:12:22.248 5.239 - 5.262: 71.1372% ( 84) 00:12:22.248 5.262 - 5.286: 71.6942% ( 65) 00:12:22.248 5.286 - 5.310: 72.3627% ( 78) 00:12:22.248 5.310 - 5.333: 74.9507% ( 302) 00:12:22.248 5.333 - 5.357: 76.9560% ( 234) 00:12:22.248 5.357 - 5.381: 79.2613% ( 269) 00:12:22.248 5.381 - 5.404: 80.1868% ( 108) 00:12:22.248 5.404 - 5.428: 80.9924% ( 94) 00:12:22.248 5.428 - 5.452: 81.9950% ( 117) 00:12:22.248 5.452 - 5.476: 82.5264% ( 62) 00:12:22.248 5.476 - 5.499: 82.6892% ( 19) 00:12:22.248 5.499 - 5.523: 82.8006% ( 13) 00:12:22.248 5.523 - 5.547: 83.5976% ( 93) 00:12:22.248 5.547 - 5.570: 87.2654% ( 428) 00:12:22.248 5.570 - 5.594: 91.3360% ( 475) 00:12:22.248 5.594 - 5.618: 94.8410% ( 409) 00:12:22.248 5.618 - 5.641: 95.5609% ( 84) 00:12:22.248 5.641 - 5.665: 95.9551% ( 46) 00:12:22.248 5.665 - 5.689: 96.1179% ( 19) 00:12:22.249 5.689 - 5.713: 96.2122% ( 11) 00:12:22.249 5.713 - 5.736: 96.2979% ( 10) 00:12:22.249 5.736 - 5.760: 96.3322% ( 4) 00:12:22.249 5.760 - 5.784: 96.4693% ( 16) 00:12:22.249 5.784 - 5.807: 96.5721% ( 12) 00:12:22.249 5.807 - 5.831: 96.6835% ( 13) 00:12:22.249 5.831 - 5.855: 96.7778% ( 11) 00:12:22.249 5.855 - 5.879: 96.8806% ( 12) 00:12:22.249 5.879 - 5.902: 96.9406% ( 7) 00:12:22.249 5.902 - 5.926: 97.0520% ( 13) 00:12:22.249 5.926 - 5.950: 97.1291% ( 9) 00:12:22.249 5.950 - 5.973: 97.1720% ( 5) 00:12:22.249 5.973 - 5.997: 97.2148% ( 5) 00:12:22.249 5.997 - 6.021: 97.2663% ( 6) 00:12:22.249 6.021 - 6.044: 97.3348% ( 8) 00:12:22.249 6.068 - 6.116: 97.3862% ( 6) 00:12:22.249 6.116 - 6.163: 97.4034% ( 2) 00:12:22.249 6.163 - 6.210: 97.4462% ( 5) 00:12:22.249 6.210 - 6.258: 97.4805% ( 4) 00:12:22.249 6.258 - 6.305: 97.5662% ( 10) 00:12:22.249 6.305 - 6.353: 97.6947% ( 15) 00:12:22.249 6.353 - 6.400: 97.7547% ( 7) 00:12:22.249 6.447 - 6.495: 97.8919% ( 16) 00:12:22.249 6.495 - 6.542: 97.9690% ( 9) 00:12:22.249 6.542 - 6.590: 98.0204% ( 6) 00:12:22.249 
6.590 - 6.637: 98.0461% ( 3) 00:12:22.249 6.637 - 6.684: 98.0890% ( 5) 00:12:22.249 6.684 - 6.732: 98.0975% ( 1) 00:12:22.249 6.732 - 6.779: 98.1147% ( 2) 00:12:22.249 6.779 - 6.827: 98.1575% ( 5) 00:12:22.249 6.827 - 6.874: 98.2261% ( 8) 00:12:22.249 6.874 - 6.921: 98.6288% ( 47) 00:12:22.249 6.921 - 6.969: 98.8602% ( 27) 00:12:22.249 6.969 - 7.016: 99.0659% ( 24) 00:12:22.249 7.016 - 7.064: 99.1345% ( 8) 00:12:22.249 7.064 - 7.111: 99.1516% ( 2) 00:12:22.249 7.111 - 7.159: 99.1602% ( 1) 00:12:22.249 7.396 - 7.443: 99.1859% ( 3) 00:12:22.249 7.490 - 7.538: 99.1944% ( 1) 00:12:22.249 7.964 - 8.012: 99.2030% ( 1) 00:12:22.249 8.107 - 8.154: 99.2116% ( 1) 00:12:22.249 8.201 - 8.249: 99.2202% ( 1) 00:12:22.249 8.249 - 8.296: 99.2287% ( 1) 00:12:22.249 8.439 - 8.486: 99.2373% ( 1) 00:12:22.249 8.533 - 8.581: 99.2459% ( 1) 00:12:22.249 8.723 - 8.770: 99.2544% ( 1) 00:12:22.249 8.818 - 8.865: 99.2630% ( 1) 00:12:22.249 8.913 - 8.960: 99.2716% ( 1) 00:12:22.249 8.960 - 9.007: 99.2887% ( 2) 00:12:22.249 9.007 - 9.055: 99.2973% ( 1) 00:12:22.249 9.102 - 9.150: 99.3230% ( 3) 00:12:22.249 9.387 - 9.434: 99.3316% ( 1) 00:12:22.249 9.434 - 9.481: 99.3487% ( 2) 00:12:22.249 9.624 - 9.671: 99.3744% ( 3) 00:12:22.249 9.719 - 9.766: 99.3830% ( 1) 00:12:22.249 9.813 - 9.861: 99.3916% ( 1) 00:12:22.249 9.908 - 9.956: 99.4087% ( 2) 00:12:22.249 9.956 - 10.003: 99.4173% ( 1) 00:12:22.249 10.003 - 10.050: 99.4258% ( 1) 00:12:22.249 10.050 - 10.098: 99.4430% ( 2) 00:12:22.249 10.098 - 10.145: 99.4515% ( 1) 00:12:22.249 10.193 - 10.240: 99.4687% ( 2) 00:12:22.249 10.240 - 10.287: 99.4772% ( 1) 00:12:22.249 10.335 - 10.382: 99.4858% ( 1) 00:12:22.249 10.477 - 10.524: 99.4944% ( 1) 00:12:22.249 10.572 - 10.619: 99.5030% ( 1) 00:12:22.249 10.714 - 10.761: 99.5201% ( 2) 00:12:22.249 10.761 - 10.809: 99.5372% ( 2) 00:12:22.249 10.809 - 10.856: 99.5458% ( 1) 00:12:22.249 10.999 - 11.046: 99.5544% ( 1) 00:12:22.249 11.141 - 11.188: 99.5629% ( 1) 00:12:22.249 11.188 - 11.236: 99.5715% ( 1) 00:12:22.249 11.283 - 11.330: 99.5972% ( 3) 00:12:22.249 11.330 - 11.378: 99.6058% ( 1) 00:12:22.249 11.662 - 11.710: 99.6144% ( 1) 00:12:22.249 11.710 - 11.757: 99.6229% ( 1) 00:12:22.249 11.947 - 11.994: 99.6315% ( 1) 00:12:22.249 11.994 - 12.041: 99.6401% ( 1) 00:12:22.249 12.041 - 12.089: 99.6486% ( 1) 00:12:22.249 12.326 - 12.421: 99.6572% ( 1) 00:12:22.249 12.610 - 12.705: 99.6658% ( 1) 00:12:22.249 12.705 - 12.800: 99.6744% ( 1) 00:12:22.249 12.990 - 13.084: 99.6829% ( 1) 00:12:22.249 13.084 - 13.179: 99.7001% ( 2) 00:12:22.249 13.179 - 13.274: 99.7172% ( 2) 00:12:22.249 13.369 - 13.464: 99.7258% ( 1) 00:12:22.249 13.559 - 13.653: 99.7343% ( 1) 00:12:22.249 13.653 - 13.748: 99.7429% ( 1) 00:12:22.249 13.748 - 13.843: 99.7686% ( 3) 00:12:22.249 13.843 - 13.938: 99.8115% ( 5) 00:12:22.249 13.938 - 14.033: 99.8629% ( 6) 00:12:22.249 14.033 - 14.127: 99.8715% ( 1) 00:12:22.249 14.127 - 14.222: 99.8886% ( 2) 00:12:22.249 14.222 - 14.317: 99.8972% ( 1) 00:12:22.249 15.265 - 15.360: 99.9057% ( 1) 00:12:22.249 16.403 - 16.498: 99.9143% ( 1) 00:12:22.249 18.299 - 18.394: 99.9229% ( 1) 00:12:22.249 4004.978 - 4029.250: 100.0000% ( 9) 00:12:22.249 00:12:22.249 Complete histogram 00:12:22.249 ================== 00:12:22.249 Range in us Cumulative Count 00:12:22.249 2.631 - 2.643: 0.0086% ( 1) 00:12:22.249 2.643 - 2.655: 1.5511% ( 180) 00:12:22.249 2.655 - 2.667: 24.1495% ( 2637) 00:12:22.249 2.667 - 2.679: 49.7729% ( 2990) 00:12:22.249 2.679 - 2.690: 54.9062% ( 599) 00:12:22.249 2.690 - 2.702: 65.8154% ( 1273) 00:12:22.249 2.702 - 2.714: 
82.0122% ( 1890) 00:12:22.249 2.714 - 2.726: 88.7994% ( 792) 00:12:22.249 2.726 - 2.738: 92.4844% ( 430) 00:12:22.249 2.738 - 2.750: 94.9524% ( 288) 00:12:22.249 2.750 - 2.761: 96.1779% ( 143) 00:12:22.249 2.761 - 2.773: 96.9235% ( 87) 00:12:22.249 2.773 - 2.785: 97.3262% ( 47) 00:12:22.249 2.785 - 2.797: 97.6862% ( 42) 00:12:22.249 2.797 - 2.809: 97.8490% ( 19) 00:12:22.249 [2024-07-25 10:19:11.866206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:22.249 2.809 - 2.821: 97.9261% ( 9) 00:12:22.249 2.821 - 2.833: 98.0118% ( 10) 00:12:22.249 2.833 - 2.844: 98.0632% ( 6) 00:12:22.249 2.844 - 2.856: 98.1061% ( 5) 00:12:22.249 2.856 - 2.868: 98.1318% ( 3) 00:12:22.249 2.868 - 2.880: 98.1918% ( 7) 00:12:22.249 2.880 - 2.892: 98.2004% ( 1) 00:12:22.249 2.892 - 2.904: 98.2518% ( 6) 00:12:22.249 2.904 - 2.916: 98.2689% ( 2) 00:12:22.249 2.951 - 2.963: 98.2775% ( 1) 00:12:22.249 2.963 - 2.975: 98.2861% ( 1) 00:12:22.249 2.975 - 2.987: 98.2946% ( 1) 00:12:22.249 2.987 - 2.999: 98.3032% ( 1) 00:12:22.249 2.999 - 3.010: 98.3118% ( 1) 00:12:22.249 3.010 - 3.022: 98.3203% ( 1) 00:12:22.249 3.034 - 3.058: 98.3460% ( 3) 00:12:22.249 3.058 - 3.081: 98.3546% ( 1) 00:12:22.249 3.129 - 3.153: 98.3718% ( 2) 00:12:22.249 3.153 - 3.176: 98.3803% ( 1) 00:12:22.249 3.247 - 3.271: 98.4146% ( 4) 00:12:22.249 3.271 - 3.295: 98.4403% ( 3) 00:12:22.249 3.295 - 3.319: 98.4660% ( 3) 00:12:22.249 3.319 - 3.342: 98.4832% ( 2) 00:12:22.249 3.342 - 3.366: 98.5346% ( 6) 00:12:22.249 3.366 - 3.390: 98.6031% ( 8) 00:12:22.249 3.390 - 3.413: 98.6717% ( 8) 00:12:22.249 3.413 - 3.437: 98.6974% ( 3) 00:12:22.249 3.437 - 3.461: 98.7317% ( 4) 00:12:22.249 3.461 - 3.484: 98.7574% ( 3) 00:12:22.249 3.484 - 3.508: 98.8088% ( 6) 00:12:22.249 3.508 - 3.532: 98.8602% ( 6) 00:12:22.249 3.532 - 3.556: 98.8945% ( 4) 00:12:22.249 3.556 - 3.579: 98.9202% ( 3) 00:12:22.249 3.579 - 3.603: 98.9288% ( 1) 00:12:22.249 3.603 - 3.627: 98.9459% ( 2) 00:12:22.249 3.627 - 3.650: 98.9631% ( 2) 00:12:22.250 3.721 - 3.745: 98.9716% ( 1) 00:12:22.250 3.887 - 3.911: 98.9802% ( 1) 00:12:22.250 4.030 - 4.053: 98.9888% ( 1) 00:12:22.250 4.148 - 4.172: 98.9973% ( 1) 00:12:22.250 4.243 - 4.267: 99.0059% ( 1) 00:12:22.250 4.314 - 4.338: 99.0145% ( 1) 00:12:22.250 4.338 - 4.361: 99.0231% ( 1) 00:12:22.250 4.527 - 4.551: 99.0316% ( 1) 00:12:22.250 6.116 - 6.163: 99.0402% ( 1) 00:12:22.250 6.210 - 6.258: 99.0488% ( 1) 00:12:22.250 6.258 - 6.305: 99.0573% ( 1) 00:12:22.250 6.305 - 6.353: 99.0659% ( 1) 00:12:22.250 6.447 - 6.495: 99.0745% ( 1) 00:12:22.250 6.732 - 6.779: 99.0830% ( 1) 00:12:22.250 6.779 - 6.827: 99.0916% ( 1) 00:12:22.250 6.874 - 6.921: 99.1002% ( 1) 00:12:22.250 6.969 - 7.016: 99.1087% ( 1) 00:12:22.250 7.111 - 7.159: 99.1173% ( 1) 00:12:22.250 7.348 - 7.396: 99.1259% ( 1) 00:12:22.250 7.443 - 7.490: 99.1345% ( 1) 00:12:22.250 7.585 - 7.633: 99.1430% ( 1) 00:12:22.250 7.633 - 7.680: 99.1516% ( 1) 00:12:22.250 7.680 - 7.727: 99.1773% ( 3) 00:12:22.250 8.012 - 8.059: 99.1859% ( 1) 00:12:22.250 8.439 - 8.486: 99.1944% ( 1) 00:12:22.250 8.486 - 8.533: 99.2030% ( 1) 00:12:22.250 8.628 - 8.676: 99.2116% ( 1) 00:12:22.250 8.723 - 8.770: 99.2287% ( 2) 00:12:22.250 8.818 - 8.865: 99.2373% ( 1) 00:12:22.250 8.865 - 8.913: 99.2459% ( 1) 00:12:22.250 8.960 - 9.007: 99.2544% ( 1) 00:12:22.250 9.244 - 9.292: 99.2630% ( 1) 00:12:22.250 9.481 - 9.529: 99.2716% ( 1) 00:12:22.250 9.719 - 9.766: 99.2801% ( 1) 00:12:22.250 10.382 - 10.430: 99.2887% ( 1) 00:12:22.250 11.899 - 11.947: 99.2973% ( 1) 00:12:22.250
13.084 - 13.179: 99.3059% ( 1) 00:12:22.250 16.403 - 16.498: 99.3144% ( 1) 00:12:22.250 3980.705 - 4004.978: 99.7001% ( 45) 00:12:22.250 4004.978 - 4029.250: 100.0000% ( 35) 00:12:22.250 00:12:22.250 10:19:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:22.250 10:19:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:22.250 10:19:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:22.250 10:19:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:22.250 10:19:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:22.508 [ 00:12:22.508 { 00:12:22.508 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:22.508 "subtype": "Discovery", 00:12:22.508 "listen_addresses": [], 00:12:22.508 "allow_any_host": true, 00:12:22.508 "hosts": [] 00:12:22.508 }, 00:12:22.508 { 00:12:22.508 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:22.508 "subtype": "NVMe", 00:12:22.508 "listen_addresses": [ 00:12:22.508 { 00:12:22.508 "trtype": "VFIOUSER", 00:12:22.508 "adrfam": "IPv4", 00:12:22.508 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:22.508 "trsvcid": "0" 00:12:22.508 } 00:12:22.508 ], 00:12:22.508 "allow_any_host": true, 00:12:22.508 "hosts": [], 00:12:22.508 "serial_number": "SPDK1", 00:12:22.508 "model_number": "SPDK bdev Controller", 00:12:22.508 "max_namespaces": 32, 00:12:22.508 "min_cntlid": 1, 00:12:22.508 "max_cntlid": 65519, 00:12:22.508 "namespaces": [ 00:12:22.508 { 00:12:22.508 "nsid": 1, 00:12:22.508 "bdev_name": "Malloc1", 00:12:22.508 "name": "Malloc1", 00:12:22.508 "nguid": "D6CC3CDEFABA477FAA975D235D8EF32D", 00:12:22.508 "uuid": "d6cc3cde-faba-477f-aa97-5d235d8ef32d" 00:12:22.508 } 00:12:22.508 ] 00:12:22.508 }, 00:12:22.508 { 00:12:22.508 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:22.508 "subtype": "NVMe", 00:12:22.508 "listen_addresses": [ 00:12:22.508 { 00:12:22.508 "trtype": "VFIOUSER", 00:12:22.508 "adrfam": "IPv4", 00:12:22.508 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:22.508 "trsvcid": "0" 00:12:22.508 } 00:12:22.508 ], 00:12:22.508 "allow_any_host": true, 00:12:22.508 "hosts": [], 00:12:22.508 "serial_number": "SPDK2", 00:12:22.508 "model_number": "SPDK bdev Controller", 00:12:22.508 "max_namespaces": 32, 00:12:22.508 "min_cntlid": 1, 00:12:22.508 "max_cntlid": 65519, 00:12:22.508 "namespaces": [ 00:12:22.508 { 00:12:22.508 "nsid": 1, 00:12:22.508 "bdev_name": "Malloc2", 00:12:22.508 "name": "Malloc2", 00:12:22.508 "nguid": "E3969BDEE18B47328DBD71D87C2BAFA6", 00:12:22.508 "uuid": "e3969bde-e18b-4732-8dbd-71d87c2bafa6" 00:12:22.508 } 00:12:22.508 ] 00:12:22.508 } 00:12:22.508 ] 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1485033 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t 
/tmp/aer_touch_file 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:22.508 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:22.774 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.774 [2024-07-25 10:19:12.399981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:23.055 Malloc3 00:12:23.055 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:23.352 [2024-07-25 10:19:12.868549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:23.352 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:23.352 Asynchronous Event Request test 00:12:23.352 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:23.352 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:23.352 Registering asynchronous event callbacks... 00:12:23.352 Starting namespace attribute notice tests for all controllers... 00:12:23.352 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:23.352 aer_cb - Changed Namespace 00:12:23.352 Cleaning up... 
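Note on the hot-add sequence above: the AER test drives the target with the same rpc.py calls visible in the trace. A minimal sketch of that flow, assuming it is run from the SPDK repository root against a target already serving the vfio-user listener shown here:

  # create a second 64 MiB malloc bdev with 512-byte blocks
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  # attach it to the subsystem as namespace 2; the connected host receives a
  # Namespace Attribute Changed notice and re-reads log page 4 (Changed Namespace
  # List), which is the "aer_cb for log page 4, aen_event_type: 0x02" line above
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  # confirm the new nsid 2 entry, as in the nvmf_get_subsystems output that follows
  ./scripts/rpc.py nvmf_get_subsystems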
00:12:23.613 [ 00:12:23.613 { 00:12:23.613 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:23.613 "subtype": "Discovery", 00:12:23.613 "listen_addresses": [], 00:12:23.613 "allow_any_host": true, 00:12:23.613 "hosts": [] 00:12:23.613 }, 00:12:23.613 { 00:12:23.613 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:23.613 "subtype": "NVMe", 00:12:23.613 "listen_addresses": [ 00:12:23.613 { 00:12:23.613 "trtype": "VFIOUSER", 00:12:23.613 "adrfam": "IPv4", 00:12:23.613 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:23.613 "trsvcid": "0" 00:12:23.613 } 00:12:23.613 ], 00:12:23.613 "allow_any_host": true, 00:12:23.613 "hosts": [], 00:12:23.613 "serial_number": "SPDK1", 00:12:23.613 "model_number": "SPDK bdev Controller", 00:12:23.613 "max_namespaces": 32, 00:12:23.613 "min_cntlid": 1, 00:12:23.613 "max_cntlid": 65519, 00:12:23.613 "namespaces": [ 00:12:23.613 { 00:12:23.613 "nsid": 1, 00:12:23.613 "bdev_name": "Malloc1", 00:12:23.613 "name": "Malloc1", 00:12:23.613 "nguid": "D6CC3CDEFABA477FAA975D235D8EF32D", 00:12:23.613 "uuid": "d6cc3cde-faba-477f-aa97-5d235d8ef32d" 00:12:23.613 }, 00:12:23.613 { 00:12:23.613 "nsid": 2, 00:12:23.613 "bdev_name": "Malloc3", 00:12:23.613 "name": "Malloc3", 00:12:23.613 "nguid": "099CED6410FA4B679C689693B5A0090F", 00:12:23.613 "uuid": "099ced64-10fa-4b67-9c68-9693b5a0090f" 00:12:23.613 } 00:12:23.613 ] 00:12:23.613 }, 00:12:23.613 { 00:12:23.613 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:23.613 "subtype": "NVMe", 00:12:23.613 "listen_addresses": [ 00:12:23.613 { 00:12:23.613 "trtype": "VFIOUSER", 00:12:23.613 "adrfam": "IPv4", 00:12:23.613 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:23.613 "trsvcid": "0" 00:12:23.613 } 00:12:23.613 ], 00:12:23.613 "allow_any_host": true, 00:12:23.613 "hosts": [], 00:12:23.613 "serial_number": "SPDK2", 00:12:23.613 "model_number": "SPDK bdev Controller", 00:12:23.613 "max_namespaces": 32, 00:12:23.613 "min_cntlid": 1, 00:12:23.613 "max_cntlid": 65519, 00:12:23.613 "namespaces": [ 00:12:23.613 { 00:12:23.613 "nsid": 1, 00:12:23.613 "bdev_name": "Malloc2", 00:12:23.613 "name": "Malloc2", 00:12:23.613 "nguid": "E3969BDEE18B47328DBD71D87C2BAFA6", 00:12:23.613 "uuid": "e3969bde-e18b-4732-8dbd-71d87c2bafa6" 00:12:23.613 } 00:12:23.613 ] 00:12:23.613 } 00:12:23.613 ] 00:12:23.613 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1485033 00:12:23.613 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:23.613 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:23.613 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:23.613 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:23.613 [2024-07-25 10:19:13.196995] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
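Note on the identify invocation above: the -r argument is an SPDK transport ID string in which trtype selects the transport (VFIOUSER), traddr is the per-controller vfio-user socket directory created by the target, and subnqn is the subsystem NQN to connect to; the -L flags enable the per-component debug traces (nvme, nvme_vfio, vfio_pci) that produce the register-level output that follows. A minimal sketch of the same invocation, assuming only that it is run from the SPDK repository root rather than the absolute Jenkins workspace path:

  # identify the second vfio-user controller with component debug logging enabled
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -g -L nvme -L nvme_vfio -L vfio_pci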
00:12:23.613 [2024-07-25 10:19:13.197048] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485141 ] 00:12:23.613 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.613 [2024-07-25 10:19:13.240589] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:23.613 [2024-07-25 10:19:13.249752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:23.613 [2024-07-25 10:19:13.249785] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f15a641f000 00:12:23.613 [2024-07-25 10:19:13.250753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.613 [2024-07-25 10:19:13.251756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.613 [2024-07-25 10:19:13.252768] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.613 [2024-07-25 10:19:13.253771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:23.613 [2024-07-25 10:19:13.254781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:23.613 [2024-07-25 10:19:13.255795] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.613 [2024-07-25 10:19:13.256803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:23.613 [2024-07-25 10:19:13.257808] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.613 [2024-07-25 10:19:13.258824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:23.613 [2024-07-25 10:19:13.258848] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f15a6414000 00:12:23.613 [2024-07-25 10:19:13.260295] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:23.613 [2024-07-25 10:19:13.277297] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:23.613 [2024-07-25 10:19:13.277334] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:23.613 [2024-07-25 10:19:13.282462] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:23.613 [2024-07-25 10:19:13.282531] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:23.613 [2024-07-25 10:19:13.282640] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:12:23.613 [2024-07-25 10:19:13.282669] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:23.613 [2024-07-25 10:19:13.282681] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:23.613 [2024-07-25 10:19:13.283463] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:23.613 [2024-07-25 10:19:13.283496] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:23.613 [2024-07-25 10:19:13.283514] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:23.613 [2024-07-25 10:19:13.284489] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:23.613 [2024-07-25 10:19:13.284512] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:23.613 [2024-07-25 10:19:13.284528] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:23.613 [2024-07-25 10:19:13.285470] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:23.613 [2024-07-25 10:19:13.285497] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:23.613 [2024-07-25 10:19:13.286506] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:23.613 [2024-07-25 10:19:13.286528] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:23.613 [2024-07-25 10:19:13.286538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:23.613 [2024-07-25 10:19:13.286552] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:23.613 [2024-07-25 10:19:13.286663] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:23.613 [2024-07-25 10:19:13.286677] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:23.613 [2024-07-25 10:19:13.286687] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:23.613 [2024-07-25 10:19:13.287496] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:23.614 [2024-07-25 10:19:13.288500] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:23.614 [2024-07-25 10:19:13.289495] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:23.614 [2024-07-25 10:19:13.290499] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.614 [2024-07-25 10:19:13.290572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:23.614 [2024-07-25 10:19:13.291510] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:23.614 [2024-07-25 10:19:13.291531] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:23.614 [2024-07-25 10:19:13.291542] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.291570] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:23.614 [2024-07-25 10:19:13.291585] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.291610] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:23.614 [2024-07-25 10:19:13.291621] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:23.614 [2024-07-25 10:19:13.291628] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.614 [2024-07-25 10:19:13.291647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.300507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.300531] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:23.614 [2024-07-25 10:19:13.300541] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:23.614 [2024-07-25 10:19:13.300550] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:23.614 [2024-07-25 10:19:13.300559] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:23.614 [2024-07-25 10:19:13.300568] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:23.614 [2024-07-25 10:19:13.300577] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:23.614 [2024-07-25 10:19:13.300586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.300601] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.300624] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.308500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.308534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.614 [2024-07-25 10:19:13.308551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.614 [2024-07-25 10:19:13.308566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.614 [2024-07-25 10:19:13.308580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.614 [2024-07-25 10:19:13.308590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.308608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.308625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.316497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.316519] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:23.614 [2024-07-25 10:19:13.316530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.316549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.316562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.316578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.324502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.324591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.324609] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.324625] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:23.614 [2024-07-25 10:19:13.324635] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:23.614 [2024-07-25 
10:19:13.324642] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.614 [2024-07-25 10:19:13.324654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.332507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.332534] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:23.614 [2024-07-25 10:19:13.332552] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.332569] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.332588] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:23.614 [2024-07-25 10:19:13.332599] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:23.614 [2024-07-25 10:19:13.332606] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.614 [2024-07-25 10:19:13.332617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.340497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.340530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.340549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.340564] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:23.614 [2024-07-25 10:19:13.340574] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:23.614 [2024-07-25 10:19:13.340581] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.614 [2024-07-25 10:19:13.340592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.348492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.348524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.348539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.348555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:23.614 [2024-07-25 
10:19:13.348570] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.348580] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.348590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.348599] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:23.614 [2024-07-25 10:19:13.348608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:23.614 [2024-07-25 10:19:13.348618] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:23.614 [2024-07-25 10:19:13.348645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.356499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.356536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.364494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.364523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.372491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.372519] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:23.614 [2024-07-25 10:19:13.380507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:23.614 [2024-07-25 10:19:13.380544] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:23.614 [2024-07-25 10:19:13.380556] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:23.615 [2024-07-25 10:19:13.380564] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:23.615 [2024-07-25 10:19:13.380571] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:23.615 [2024-07-25 10:19:13.380578] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:23.615 [2024-07-25 10:19:13.380589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:23.615 [2024-07-25 10:19:13.380603] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:23.615 [2024-07-25 10:19:13.380612] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:12:23.615 [2024-07-25 10:19:13.380619] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.615 [2024-07-25 10:19:13.380630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:23.615 [2024-07-25 10:19:13.380643] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:23.615 [2024-07-25 10:19:13.380653] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:23.615 [2024-07-25 10:19:13.380660] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.615 [2024-07-25 10:19:13.380670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:23.615 [2024-07-25 10:19:13.380685] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:23.615 [2024-07-25 10:19:13.380694] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:23.615 [2024-07-25 10:19:13.380701] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.615 [2024-07-25 10:19:13.380712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:23.615 [2024-07-25 10:19:13.388496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:23.615 [2024-07-25 10:19:13.388528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:23.615 [2024-07-25 10:19:13.388548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:23.615 [2024-07-25 10:19:13.388561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:23.615 ===================================================== 00:12:23.615 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:23.615 ===================================================== 00:12:23.615 Controller Capabilities/Features 00:12:23.615 ================================ 00:12:23.615 Vendor ID: 4e58 00:12:23.615 Subsystem Vendor ID: 4e58 00:12:23.615 Serial Number: SPDK2 00:12:23.615 Model Number: SPDK bdev Controller 00:12:23.615 Firmware Version: 24.09 00:12:23.615 Recommended Arb Burst: 6 00:12:23.615 IEEE OUI Identifier: 8d 6b 50 00:12:23.615 Multi-path I/O 00:12:23.615 May have multiple subsystem ports: Yes 00:12:23.615 May have multiple controllers: Yes 00:12:23.615 Associated with SR-IOV VF: No 00:12:23.615 Max Data Transfer Size: 131072 00:12:23.615 Max Number of Namespaces: 32 00:12:23.615 Max Number of I/O Queues: 127 00:12:23.615 NVMe Specification Version (VS): 1.3 00:12:23.615 NVMe Specification Version (Identify): 1.3 00:12:23.615 Maximum Queue Entries: 256 00:12:23.615 Contiguous Queues Required: Yes 00:12:23.615 Arbitration Mechanisms Supported 00:12:23.615 Weighted Round Robin: Not Supported 00:12:23.615 Vendor Specific: Not Supported 00:12:23.615 Reset Timeout: 15000 ms 00:12:23.615 Doorbell Stride: 4 
bytes 00:12:23.615 NVM Subsystem Reset: Not Supported 00:12:23.615 Command Sets Supported 00:12:23.615 NVM Command Set: Supported 00:12:23.615 Boot Partition: Not Supported 00:12:23.615 Memory Page Size Minimum: 4096 bytes 00:12:23.615 Memory Page Size Maximum: 4096 bytes 00:12:23.615 Persistent Memory Region: Not Supported 00:12:23.615 Optional Asynchronous Events Supported 00:12:23.615 Namespace Attribute Notices: Supported 00:12:23.615 Firmware Activation Notices: Not Supported 00:12:23.615 ANA Change Notices: Not Supported 00:12:23.615 PLE Aggregate Log Change Notices: Not Supported 00:12:23.615 LBA Status Info Alert Notices: Not Supported 00:12:23.615 EGE Aggregate Log Change Notices: Not Supported 00:12:23.615 Normal NVM Subsystem Shutdown event: Not Supported 00:12:23.615 Zone Descriptor Change Notices: Not Supported 00:12:23.615 Discovery Log Change Notices: Not Supported 00:12:23.615 Controller Attributes 00:12:23.615 128-bit Host Identifier: Supported 00:12:23.615 Non-Operational Permissive Mode: Not Supported 00:12:23.615 NVM Sets: Not Supported 00:12:23.615 Read Recovery Levels: Not Supported 00:12:23.615 Endurance Groups: Not Supported 00:12:23.615 Predictable Latency Mode: Not Supported 00:12:23.615 Traffic Based Keep ALive: Not Supported 00:12:23.615 Namespace Granularity: Not Supported 00:12:23.615 SQ Associations: Not Supported 00:12:23.615 UUID List: Not Supported 00:12:23.615 Multi-Domain Subsystem: Not Supported 00:12:23.615 Fixed Capacity Management: Not Supported 00:12:23.615 Variable Capacity Management: Not Supported 00:12:23.615 Delete Endurance Group: Not Supported 00:12:23.615 Delete NVM Set: Not Supported 00:12:23.615 Extended LBA Formats Supported: Not Supported 00:12:23.615 Flexible Data Placement Supported: Not Supported 00:12:23.615 00:12:23.615 Controller Memory Buffer Support 00:12:23.615 ================================ 00:12:23.615 Supported: No 00:12:23.615 00:12:23.615 Persistent Memory Region Support 00:12:23.615 ================================ 00:12:23.615 Supported: No 00:12:23.615 00:12:23.615 Admin Command Set Attributes 00:12:23.615 ============================ 00:12:23.615 Security Send/Receive: Not Supported 00:12:23.615 Format NVM: Not Supported 00:12:23.615 Firmware Activate/Download: Not Supported 00:12:23.615 Namespace Management: Not Supported 00:12:23.615 Device Self-Test: Not Supported 00:12:23.615 Directives: Not Supported 00:12:23.615 NVMe-MI: Not Supported 00:12:23.615 Virtualization Management: Not Supported 00:12:23.615 Doorbell Buffer Config: Not Supported 00:12:23.615 Get LBA Status Capability: Not Supported 00:12:23.615 Command & Feature Lockdown Capability: Not Supported 00:12:23.615 Abort Command Limit: 4 00:12:23.615 Async Event Request Limit: 4 00:12:23.615 Number of Firmware Slots: N/A 00:12:23.615 Firmware Slot 1 Read-Only: N/A 00:12:23.876 Firmware Activation Without Reset: N/A 00:12:23.876 Multiple Update Detection Support: N/A 00:12:23.876 Firmware Update Granularity: No Information Provided 00:12:23.876 Per-Namespace SMART Log: No 00:12:23.876 Asymmetric Namespace Access Log Page: Not Supported 00:12:23.876 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:23.876 Command Effects Log Page: Supported 00:12:23.876 Get Log Page Extended Data: Supported 00:12:23.876 Telemetry Log Pages: Not Supported 00:12:23.876 Persistent Event Log Pages: Not Supported 00:12:23.876 Supported Log Pages Log Page: May Support 00:12:23.876 Commands Supported & Effects Log Page: Not Supported 00:12:23.876 Feature Identifiers & Effects Log 
Page:May Support 00:12:23.876 NVMe-MI Commands & Effects Log Page: May Support 00:12:23.876 Data Area 4 for Telemetry Log: Not Supported 00:12:23.876 Error Log Page Entries Supported: 128 00:12:23.876 Keep Alive: Supported 00:12:23.876 Keep Alive Granularity: 10000 ms 00:12:23.876 00:12:23.876 NVM Command Set Attributes 00:12:23.876 ========================== 00:12:23.876 Submission Queue Entry Size 00:12:23.876 Max: 64 00:12:23.876 Min: 64 00:12:23.876 Completion Queue Entry Size 00:12:23.876 Max: 16 00:12:23.876 Min: 16 00:12:23.876 Number of Namespaces: 32 00:12:23.876 Compare Command: Supported 00:12:23.876 Write Uncorrectable Command: Not Supported 00:12:23.876 Dataset Management Command: Supported 00:12:23.876 Write Zeroes Command: Supported 00:12:23.876 Set Features Save Field: Not Supported 00:12:23.876 Reservations: Not Supported 00:12:23.876 Timestamp: Not Supported 00:12:23.876 Copy: Supported 00:12:23.876 Volatile Write Cache: Present 00:12:23.876 Atomic Write Unit (Normal): 1 00:12:23.876 Atomic Write Unit (PFail): 1 00:12:23.876 Atomic Compare & Write Unit: 1 00:12:23.876 Fused Compare & Write: Supported 00:12:23.876 Scatter-Gather List 00:12:23.876 SGL Command Set: Supported (Dword aligned) 00:12:23.876 SGL Keyed: Not Supported 00:12:23.876 SGL Bit Bucket Descriptor: Not Supported 00:12:23.876 SGL Metadata Pointer: Not Supported 00:12:23.876 Oversized SGL: Not Supported 00:12:23.876 SGL Metadata Address: Not Supported 00:12:23.876 SGL Offset: Not Supported 00:12:23.876 Transport SGL Data Block: Not Supported 00:12:23.876 Replay Protected Memory Block: Not Supported 00:12:23.876 00:12:23.876 Firmware Slot Information 00:12:23.876 ========================= 00:12:23.876 Active slot: 1 00:12:23.876 Slot 1 Firmware Revision: 24.09 00:12:23.876 00:12:23.876 00:12:23.876 Commands Supported and Effects 00:12:23.876 ============================== 00:12:23.876 Admin Commands 00:12:23.876 -------------- 00:12:23.876 Get Log Page (02h): Supported 00:12:23.876 Identify (06h): Supported 00:12:23.876 Abort (08h): Supported 00:12:23.876 Set Features (09h): Supported 00:12:23.876 Get Features (0Ah): Supported 00:12:23.876 Asynchronous Event Request (0Ch): Supported 00:12:23.876 Keep Alive (18h): Supported 00:12:23.876 I/O Commands 00:12:23.876 ------------ 00:12:23.876 Flush (00h): Supported LBA-Change 00:12:23.876 Write (01h): Supported LBA-Change 00:12:23.876 Read (02h): Supported 00:12:23.876 Compare (05h): Supported 00:12:23.876 Write Zeroes (08h): Supported LBA-Change 00:12:23.876 Dataset Management (09h): Supported LBA-Change 00:12:23.876 Copy (19h): Supported LBA-Change 00:12:23.876 00:12:23.876 Error Log 00:12:23.876 ========= 00:12:23.876 00:12:23.876 Arbitration 00:12:23.876 =========== 00:12:23.876 Arbitration Burst: 1 00:12:23.876 00:12:23.876 Power Management 00:12:23.876 ================ 00:12:23.876 Number of Power States: 1 00:12:23.876 Current Power State: Power State #0 00:12:23.876 Power State #0: 00:12:23.876 Max Power: 0.00 W 00:12:23.876 Non-Operational State: Operational 00:12:23.876 Entry Latency: Not Reported 00:12:23.876 Exit Latency: Not Reported 00:12:23.876 Relative Read Throughput: 0 00:12:23.876 Relative Read Latency: 0 00:12:23.876 Relative Write Throughput: 0 00:12:23.876 Relative Write Latency: 0 00:12:23.876 Idle Power: Not Reported 00:12:23.876 Active Power: Not Reported 00:12:23.876 Non-Operational Permissive Mode: Not Supported 00:12:23.876 00:12:23.876 Health Information 00:12:23.876 ================== 00:12:23.876 Critical Warnings: 00:12:23.876 
Available Spare Space: OK 00:12:23.876 Temperature: OK 00:12:23.876 Device Reliability: OK 00:12:23.876 Read Only: No 00:12:23.876 Volatile Memory Backup: OK 00:12:23.876 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:23.876 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:23.876 Available Spare: 0% 00:12:23.876 Available Spare Threshold: 0% 00:12:23.876 [2024-07-25 10:19:13.388709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:23.876 [2024-07-25 10:19:13.396513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:23.876 [2024-07-25 10:19:13.396573] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:23.876 [2024-07-25 10:19:13.396597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.876 [2024-07-25 10:19:13.396611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.876 [2024-07-25 10:19:13.396623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.876 [2024-07-25 10:19:13.396634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.876 [2024-07-25 10:19:13.396713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:23.876 [2024-07-25 10:19:13.396737] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:23.876 [2024-07-25 10:19:13.397723] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:23.876 [2024-07-25 10:19:13.397804] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:23.876 [2024-07-25 10:19:13.397820] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:23.876 [2024-07-25 10:19:13.398731] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:23.876 [2024-07-25 10:19:13.398757] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:23.876 [2024-07-25 10:19:13.398831] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:23.876 [2024-07-25 10:19:13.400373] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:23.876 Life Percentage Used: 0% 00:12:23.876 Data Units Read: 0 00:12:23.876 Data Units Written: 0 00:12:23.876 Host Read Commands: 0 00:12:23.876 Host Write Commands: 0 00:12:23.876 Controller Busy Time: 0 minutes 00:12:23.876 Power Cycles: 0 00:12:23.876 Power On Hours: 0 hours 00:12:23.876 Unsafe Shutdowns: 0 00:12:23.876 Unrecoverable Media Errors: 0 00:12:23.876 Lifetime Error Log Entries: 0 00:12:23.876 Warning Temperature Time: 0 minutes 00:12:23.876 Critical Temperature Time: 0 minutes 00:12:23.876 
00:12:23.876 Number of Queues 00:12:23.876 ================ 00:12:23.876 Number of I/O Submission Queues: 127 00:12:23.876 Number of I/O Completion Queues: 127 00:12:23.876 00:12:23.876 Active Namespaces 00:12:23.876 ================= 00:12:23.876 Namespace ID:1 00:12:23.876 Error Recovery Timeout: Unlimited 00:12:23.876 Command Set Identifier: NVM (00h) 00:12:23.876 Deallocate: Supported 00:12:23.876 Deallocated/Unwritten Error: Not Supported 00:12:23.876 Deallocated Read Value: Unknown 00:12:23.876 Deallocate in Write Zeroes: Not Supported 00:12:23.876 Deallocated Guard Field: 0xFFFF 00:12:23.876 Flush: Supported 00:12:23.876 Reservation: Supported 00:12:23.876 Namespace Sharing Capabilities: Multiple Controllers 00:12:23.876 Size (in LBAs): 131072 (0GiB) 00:12:23.876 Capacity (in LBAs): 131072 (0GiB) 00:12:23.876 Utilization (in LBAs): 131072 (0GiB) 00:12:23.876 NGUID: E3969BDEE18B47328DBD71D87C2BAFA6 00:12:23.876 UUID: e3969bde-e18b-4732-8dbd-71d87c2bafa6 00:12:23.876 Thin Provisioning: Not Supported 00:12:23.876 Per-NS Atomic Units: Yes 00:12:23.876 Atomic Boundary Size (Normal): 0 00:12:23.876 Atomic Boundary Size (PFail): 0 00:12:23.876 Atomic Boundary Offset: 0 00:12:23.876 Maximum Single Source Range Length: 65535 00:12:23.876 Maximum Copy Length: 65535 00:12:23.876 Maximum Source Range Count: 1 00:12:23.876 NGUID/EUI64 Never Reused: No 00:12:23.876 Namespace Write Protected: No 00:12:23.877 Number of LBA Formats: 1 00:12:23.877 Current LBA Format: LBA Format #00 00:12:23.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:23.877 00:12:23.877 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:23.877 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.877 [2024-07-25 10:19:13.641571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.152 Initializing NVMe Controllers 00:12:29.153 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:29.153 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:29.153 Initialization complete. Launching workers. 
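For reference, the @84 step above drives the vfio-user controller with spdk_nvme_perf. A minimal sketch of the same invocation, with the transport ID string and every flag copied verbatim from the echo above (per spdk_nvme_perf usage, -q is the queue depth, -o the I/O size in bytes, -w the workload pattern, -t the run time in seconds, and -c the core mask; -s 256 and -g are simply passed through as the test script supplies them):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The latency table that follows reports IOPS, MiB/s, and average/min/max latency in microseconds for this 4 KiB read workload; the @85 step repeats it with -w write.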
00:12:29.153 ======================================================== 00:12:29.153 Latency(us) 00:12:29.153 Device Information : IOPS MiB/s Average min max 00:12:29.153 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24110.68 94.18 5308.18 1497.38 11566.66 00:12:29.153 ======================================================== 00:12:29.153 Total : 24110.68 94.18 5308.18 1497.38 11566.66 00:12:29.153 00:12:29.153 [2024-07-25 10:19:18.745801] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.153 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:29.153 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.412 [2024-07-25 10:19:18.981452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:34.692 Initializing NVMe Controllers 00:12:34.692 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:34.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:34.692 Initialization complete. Launching workers. 00:12:34.692 ======================================================== 00:12:34.692 Latency(us) 00:12:34.692 Device Information : IOPS MiB/s Average min max 00:12:34.692 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24124.10 94.23 5305.86 1500.32 11261.94 00:12:34.692 ======================================================== 00:12:34.692 Total : 24124.10 94.23 5305.86 1500.32 11261.94 00:12:34.692 00:12:34.692 [2024-07-25 10:19:24.006643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:34.692 10:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:34.692 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.692 [2024-07-25 10:19:24.230161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:39.974 [2024-07-25 10:19:29.353834] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:39.974 Initializing NVMe Controllers 00:12:39.974 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:39.974 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:39.974 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:39.974 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:39.974 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:39.974 Initialization complete. Launching workers. 
00:12:39.974 Starting thread on core 2 00:12:39.974 Starting thread on core 3 00:12:39.974 Starting thread on core 1 00:12:39.974 10:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:39.974 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.974 [2024-07-25 10:19:29.640937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.266 [2024-07-25 10:19:32.696857] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.266 Initializing NVMe Controllers 00:12:43.266 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.266 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.266 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:43.266 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:43.266 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:43.266 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:43.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:43.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:43.266 Initialization complete. Launching workers. 00:12:43.266 Starting thread on core 1 with urgent priority queue 00:12:43.266 Starting thread on core 2 with urgent priority queue 00:12:43.266 Starting thread on core 3 with urgent priority queue 00:12:43.266 Starting thread on core 0 with urgent priority queue 00:12:43.266 SPDK bdev Controller (SPDK2 ) core 0: 6434.67 IO/s 15.54 secs/100000 ios 00:12:43.266 SPDK bdev Controller (SPDK2 ) core 1: 7362.67 IO/s 13.58 secs/100000 ios 00:12:43.266 SPDK bdev Controller (SPDK2 ) core 2: 7302.33 IO/s 13.69 secs/100000 ios 00:12:43.266 SPDK bdev Controller (SPDK2 ) core 3: 6920.00 IO/s 14.45 secs/100000 ios 00:12:43.266 ======================================================== 00:12:43.266 00:12:43.266 10:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:43.266 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.266 [2024-07-25 10:19:32.973449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.266 Initializing NVMe Controllers 00:12:43.266 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.266 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.266 Namespace ID: 1 size: 0GB 00:12:43.266 Initialization complete. 00:12:43.266 INFO: using host memory buffer for IO 00:12:43.266 Hello world! 
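The reconnect, arbitration, and hello_world examples above, like the overhead tool that follows, all reach the controller through the same -r transport ID string; only the per-tool flags differ. A consolidated sketch, with paths and flags copied from the echoes in this log ($SPDK and $TRID are shorthand introduced here, not names used by the test script):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # reconnect: 50/50 random read/write, queue depth 32, cores 1-3 (mask 0xE)
  $SPDK/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  # arbitration: per-core queues with urgent priority, as in the run above
  $SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
  # hello_world: one write and read-back through namespace 1
  $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"
  # overhead: per-I/O submit/complete cost; -H enables the histograms shown below
  $SPDK/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TRID"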
00:12:43.266 [2024-07-25 10:19:32.985525] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.266 10:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:43.527 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.527 [2024-07-25 10:19:33.262455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:44.904 Initializing NVMe Controllers 00:12:44.904 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:44.904 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:44.904 Initialization complete. Launching workers. 00:12:44.904 submit (in ns) avg, min, max = 11326.9, 4468.1, 4019906.7 00:12:44.904 complete (in ns) avg, min, max = 26848.1, 2656.3, 5000290.4 00:12:44.904 00:12:44.904 Submit histogram 00:12:44.904 ================ 00:12:44.904 Range in us Cumulative Count 00:12:44.904 4.456 - 4.480: 0.0085% ( 1) 00:12:44.904 4.480 - 4.504: 0.1701% ( 19) 00:12:44.904 4.504 - 4.527: 1.0547% ( 104) 00:12:44.904 4.527 - 4.551: 3.4533% ( 282) 00:12:44.904 4.551 - 4.575: 7.8506% ( 517) 00:12:44.904 4.575 - 4.599: 12.9795% ( 603) 00:12:44.904 4.599 - 4.622: 16.6709% ( 434) 00:12:44.904 4.622 - 4.646: 18.4401% ( 208) 00:12:44.904 4.646 - 4.670: 19.0610% ( 73) 00:12:44.904 4.670 - 4.693: 19.7244% ( 78) 00:12:44.904 4.693 - 4.717: 21.0938% ( 161) 00:12:44.904 4.717 - 4.741: 24.2239% ( 368) 00:12:44.904 4.741 - 4.764: 29.0976% ( 573) 00:12:44.904 4.764 - 4.788: 34.5581% ( 642) 00:12:44.904 4.788 - 4.812: 37.9093% ( 394) 00:12:44.904 4.812 - 4.836: 38.9725% ( 125) 00:12:44.904 4.836 - 4.859: 39.2787% ( 36) 00:12:44.904 4.859 - 4.883: 39.6190% ( 40) 00:12:44.904 4.883 - 4.907: 40.0357% ( 49) 00:12:44.904 4.907 - 4.930: 40.5971% ( 66) 00:12:44.904 4.930 - 4.954: 41.1500% ( 65) 00:12:44.904 4.954 - 4.978: 41.5837% ( 51) 00:12:44.904 4.978 - 5.001: 41.9665% ( 45) 00:12:44.904 5.001 - 5.025: 42.2727% ( 36) 00:12:44.904 5.025 - 5.049: 42.4258% ( 18) 00:12:44.904 5.049 - 5.073: 42.5449% ( 14) 00:12:44.904 5.073 - 5.096: 42.6129% ( 8) 00:12:44.904 5.096 - 5.120: 42.8085% ( 23) 00:12:44.904 5.120 - 5.144: 43.2849% ( 56) 00:12:44.904 5.144 - 5.167: 45.3007% ( 237) 00:12:44.904 5.167 - 5.191: 48.8730% ( 420) 00:12:44.904 5.191 - 5.215: 53.3044% ( 521) 00:12:44.904 5.215 - 5.239: 55.5074% ( 259) 00:12:44.904 5.239 - 5.262: 56.3494% ( 99) 00:12:44.904 5.262 - 5.286: 57.1489% ( 94) 00:12:44.904 5.286 - 5.310: 58.5949% ( 170) 00:12:44.904 5.310 - 5.333: 62.1417% ( 417) 00:12:44.904 5.333 - 5.357: 65.7481% ( 424) 00:12:44.904 5.357 - 5.381: 68.9802% ( 380) 00:12:44.904 5.381 - 5.404: 70.2815% ( 153) 00:12:44.904 5.404 - 5.428: 71.3617% ( 127) 00:12:44.904 5.428 - 5.452: 72.7652% ( 165) 00:12:44.904 5.452 - 5.476: 73.3350% ( 67) 00:12:44.904 5.476 - 5.499: 73.4796% ( 17) 00:12:44.904 5.499 - 5.523: 73.7178% ( 28) 00:12:44.904 5.523 - 5.547: 75.9292% ( 260) 00:12:44.904 5.547 - 5.570: 81.0241% ( 599) 00:12:44.904 5.570 - 5.594: 87.8285% ( 800) 00:12:44.904 5.594 - 5.618: 92.0813% ( 500) 00:12:44.904 5.618 - 5.641: 92.9659% ( 104) 00:12:44.904 5.641 - 5.665: 93.3486% ( 45) 00:12:44.904 5.665 - 5.689: 93.5953% ( 29) 00:12:44.904 5.689 - 5.713: 93.8250% ( 27) 00:12:44.904 5.713 - 5.736: 93.9610% ( 16) 00:12:44.904 5.736 - 5.760: 94.0461% ( 10) 00:12:44.904 5.760 - 
5.784: 94.2332% ( 22) 00:12:44.905 5.784 - 5.807: 94.3693% ( 16) 00:12:44.905 5.807 - 5.831: 94.7180% ( 41) 00:12:44.905 5.831 - 5.855: 94.9307% ( 25) 00:12:44.905 5.855 - 5.879: 95.1433% ( 25) 00:12:44.905 5.879 - 5.902: 95.2709% ( 15) 00:12:44.905 5.902 - 5.926: 95.3815% ( 13) 00:12:44.905 5.926 - 5.950: 95.5941% ( 25) 00:12:44.905 5.950 - 5.973: 95.6962% ( 12) 00:12:44.905 5.973 - 5.997: 95.7897% ( 11) 00:12:44.905 5.997 - 6.021: 95.8578% ( 8) 00:12:44.905 6.021 - 6.044: 95.9173% ( 7) 00:12:44.905 6.044 - 6.068: 95.9684% ( 6) 00:12:44.905 6.068 - 6.116: 96.0704% ( 12) 00:12:44.905 6.116 - 6.163: 96.2065% ( 16) 00:12:44.905 6.163 - 6.210: 96.3256% ( 14) 00:12:44.905 6.210 - 6.258: 96.4362% ( 13) 00:12:44.905 6.258 - 6.305: 96.5808% ( 17) 00:12:44.905 6.305 - 6.353: 96.7339% ( 18) 00:12:44.905 6.353 - 6.400: 96.8444% ( 13) 00:12:44.905 6.400 - 6.447: 96.8614% ( 2) 00:12:44.905 6.447 - 6.495: 97.0316% ( 20) 00:12:44.905 6.495 - 6.542: 97.2697% ( 28) 00:12:44.905 6.542 - 6.590: 97.3037% ( 4) 00:12:44.905 6.590 - 6.637: 97.3803% ( 9) 00:12:44.905 6.637 - 6.684: 97.4824% ( 12) 00:12:44.905 6.684 - 6.732: 97.5249% ( 5) 00:12:44.905 6.732 - 6.779: 97.5589% ( 4) 00:12:44.905 6.779 - 6.827: 97.5844% ( 3) 00:12:44.905 6.827 - 6.874: 97.8396% ( 30) 00:12:44.905 6.874 - 6.921: 98.4775% ( 75) 00:12:44.905 6.921 - 6.969: 98.8007% ( 38) 00:12:44.905 6.969 - 7.016: 99.0134% ( 25) 00:12:44.905 7.016 - 7.064: 99.0984% ( 10) 00:12:44.905 7.111 - 7.159: 99.1239% ( 3) 00:12:44.905 7.443 - 7.490: 99.1324% ( 1) 00:12:44.905 7.490 - 7.538: 99.1494% ( 2) 00:12:44.905 7.538 - 7.585: 99.1579% ( 1) 00:12:44.905 7.585 - 7.633: 99.1665% ( 1) 00:12:44.905 7.680 - 7.727: 99.1750% ( 1) 00:12:44.905 7.727 - 7.775: 99.1835% ( 1) 00:12:44.905 7.775 - 7.822: 99.1920% ( 1) 00:12:44.905 8.059 - 8.107: 99.2005% ( 1) 00:12:44.905 8.249 - 8.296: 99.2090% ( 1) 00:12:44.905 8.391 - 8.439: 99.2175% ( 1) 00:12:44.905 8.439 - 8.486: 99.2260% ( 1) 00:12:44.905 8.818 - 8.865: 99.2345% ( 1) 00:12:44.905 8.960 - 9.007: 99.2430% ( 1) 00:12:44.905 9.055 - 9.102: 99.2515% ( 1) 00:12:44.905 9.150 - 9.197: 99.2600% ( 1) 00:12:44.905 9.339 - 9.387: 99.2685% ( 1) 00:12:44.905 9.387 - 9.434: 99.2940% ( 3) 00:12:44.905 9.434 - 9.481: 99.3025% ( 1) 00:12:44.905 9.576 - 9.624: 99.3110% ( 1) 00:12:44.905 9.624 - 9.671: 99.3281% ( 2) 00:12:44.905 9.766 - 9.813: 99.3366% ( 1) 00:12:44.905 9.813 - 9.861: 99.3451% ( 1) 00:12:44.905 9.861 - 9.908: 99.3536% ( 1) 00:12:44.905 10.003 - 10.050: 99.3621% ( 1) 00:12:44.905 10.050 - 10.098: 99.3706% ( 1) 00:12:44.905 10.098 - 10.145: 99.3961% ( 3) 00:12:44.905 10.193 - 10.240: 99.4216% ( 3) 00:12:44.905 10.240 - 10.287: 99.4301% ( 1) 00:12:44.905 10.287 - 10.335: 99.4386% ( 1) 00:12:44.905 10.335 - 10.382: 99.4471% ( 1) 00:12:44.905 10.430 - 10.477: 99.4556% ( 1) 00:12:44.905 10.667 - 10.714: 99.4641% ( 1) 00:12:44.905 10.714 - 10.761: 99.4812% ( 2) 00:12:44.905 10.761 - 10.809: 99.4897% ( 1) 00:12:44.905 10.809 - 10.856: 99.4982% ( 1) 00:12:44.905 10.904 - 10.951: 99.5152% ( 2) 00:12:44.905 10.999 - 11.046: 99.5237% ( 1) 00:12:44.905 11.188 - 11.236: 99.5322% ( 1) 00:12:44.905 11.425 - 11.473: 99.5407% ( 1) 00:12:44.905 11.520 - 11.567: 99.5492% ( 1) 00:12:44.905 11.567 - 11.615: 99.5577% ( 1) 00:12:44.905 11.662 - 11.710: 99.5662% ( 1) 00:12:44.905 11.710 - 11.757: 99.5747% ( 1) 00:12:44.905 11.899 - 11.947: 99.5832% ( 1) 00:12:44.905 11.947 - 11.994: 99.5917% ( 1) 00:12:44.905 12.136 - 12.231: 99.6087% ( 2) 00:12:44.905 12.421 - 12.516: 99.6258% ( 2) 00:12:44.905 12.516 - 12.610: 99.6343% ( 1) 00:12:44.905 
12.705 - 12.800: 99.6428% ( 1) 00:12:44.905 12.800 - 12.895: 99.6513% ( 1) 00:12:44.905 13.084 - 13.179: 99.6598% ( 1) 00:12:44.905 13.179 - 13.274: 99.6853% ( 3) 00:12:44.905 13.274 - 13.369: 99.7023% ( 2) 00:12:44.905 13.369 - 13.464: 99.7108% ( 1) 00:12:44.905 13.843 - 13.938: 99.7278% ( 2) 00:12:44.905 13.938 - 14.033: 99.7618% ( 4) 00:12:44.905 14.033 - 14.127: 99.7789% ( 2) 00:12:44.905 14.317 - 14.412: 99.7874% ( 1) 00:12:44.905 14.601 - 14.696: 99.7959% ( 1) 00:12:44.905 17.541 - 17.636: 99.8044% ( 1) 00:12:44.905 17.730 - 17.825: 99.8129% ( 1) 00:12:44.905 17.920 - 18.015: 99.8214% ( 1) 00:12:44.905 18.110 - 18.204: 99.8299% ( 1) 00:12:44.905 19.437 - 19.532: 99.8384% ( 1) 00:12:44.905 25.790 - 25.979: 99.8469% ( 1) 00:12:44.905 3980.705 - 4004.978: 99.8894% ( 5) 00:12:44.905 4004.978 - 4029.250: 100.0000% ( 13) 00:12:44.905 00:12:44.905 Complete histogram 00:12:44.905 ================== 00:12:44.905 Range in us Cumulative Count 00:12:44.905 2.655 - 2.667: 1.6161% ( 190) 00:12:44.905 2.667 - 2.679: 21.2214% ( 2305) 00:12:44.905 2.679 - 2.690: 45.5984% ( 2866) 00:12:44.905 2.690 - 2.702: 51.1610% ( 654) 00:12:44.905 2.702 - 2.714: 63.2729% ( 1424) 00:12:44.905 2.714 - 2.726: 82.2149% ( 2227) 00:12:44.905 2.726 - 2.738: 89.6827% ( 878) 00:12:44.905 2.738 - 2.750: 94.1482% ( 525) 00:12:44.905 2.750 - 2.761: 96.3766% ( 262) 00:12:44.905 2.761 - 2.773: 97.3888% ( 119) 00:12:44.905 2.773 - 2.785: 97.9417% ( 65) 00:12:44.905 2.785 - 2.797: 98.1543% ( 25) 00:12:44.905 2.797 - 2.809: 98.2053% ( 6) 00:12:44.905 2.809 - 2.821: 98.2479% ( 5) 00:12:44.905 2.821 - 2.833: 98.2819% ( 4) 00:12:44.905 2.833 - 2.844: 98.3074% ( 3) 00:12:44.905 2.844 - 2.856: 98.3329% ( 3) 00:12:44.905 2.856 - 2.868: 98.3499% ( 2) 00:12:44.905 2.868 - 2.880: 98.3584% ( 1) 00:12:44.905 2.880 - 2.892: 98.3754% ( 2) 00:12:44.905 2.892 - 2.904: 98.4180% ( 5) 00:12:44.905 2.916 - 2.927: 98.4520% ( 4) 00:12:44.905 2.927 - 2.939: 98.4605% ( 1) 00:12:44.905 2.939 - 2.951: 98.4690% ( 1) 00:12:44.905 2.951 - 2.963: 98.4775% ( 1) 00:12:44.905 2.963 - 2.975: 98.4860% ( 1) 00:12:44.905 2.987 - 2.999: 98.4945% ( 1) 00:12:44.905 3.022 - 3.034: 98.5115% ( 2) 00:12:44.905 3.034 - 3.058: 98.5285% ( 2) 00:12:44.905 3.153 - 3.176: 98.5370% ( 1) 00:12:44.905 3.176 - 3.200: 98.5455% ( 1) 00:12:44.905 3.200 - 3.224: 98.5541% ( 1) 00:12:44.905 3.224 - 3.247: 98.5626% ( 1) 00:12:44.905 3.247 - 3.271: 98.5881% ( 3) 00:12:44.905 3.271 - 3.295: 98.6136% ( 3) 00:12:44.905 3.295 - 3.319: 98.6306% ( 2) 00:12:44.905 3.319 - 3.342: 98.6646% ( 4) 00:12:44.905 3.342 - 3.366: 98.7072% ( 5) 00:12:44.905 3.366 - 3.390: 98.7327% ( 3) 00:12:44.905 3.390 - 3.413: 98.7412% ( 1) 00:12:44.905 3.413 - 3.437: 98.7752% ( 4) 00:12:44.905 3.437 - 3.461: 98.7922% ( 2) 00:12:44.905 3.461 - 3.484: 98.8092% ( 2) 00:12:44.905 3.484 - 3.508: 98.8177% ( 1) 00:12:44.905 3.508 - 3.532: 98.8432% ( 3) 00:12:44.905 3.532 - 3.556: 98.8688% ( 3) 00:12:44.905 3.556 - 3.579: 98.9028% ( 4) 00:12:44.905 3.579 - 3.603: 98.9113% ( 1) 00:12:44.905 3.603 - 3.627: 98.9198% ( 1) 00:12:44.905 3.627 - 3.650: 98.9283% ( 1) 00:12:44.905 3.650 - 3.674: 98.9453% ( 2) 00:12:44.905 3.674 - 3.698: 98.9623% ( 2) 00:12:44.905 3.698 - 3.721: 98.9878% ( 3) 00:12:44.905 3.721 - 3.745: 98.9963% ( 1) 00:12:44.905 3.769 - 3.793: 99.0219% ( 3) 00:12:44.905 3.793 - 3.816: 99.0304% ( 1) 00:12:44.905 3.840 - 3.864: 99.0389% ( 1) 00:12:44.905 4.006 - 4.030: 99.0474% ( 1) 00:12:44.905 4.101 - 4.124: 99.0559% ( 1) 00:12:44.905 4.148 - 4.172: 99.0644% ( 1) 00:12:44.905 4.290 - 4.314: 99.0729% ( 1) 00:12:44.906 
4.338 - 4.361: 99.0814% ( 1) 00:12:44.906 4.385 - 4.409: 99.0899% ( 1) 00:12:44.906 4.599 - 4.622: 99.0984% ( 1) 00:12:44.906 4.670 - 4.693: 99.1154% ( 2) 00:12:44.906 4.764 - 4.788: 99.1239% ( 1) 00:12:44.906 4.978 - 5.001: 99.1324% ( 1) 00:12:44.906 5.855 - 5.879: 99.1409% ( 1) 00:12:44.906 6.353 - 6.400: 99.1494% ( 1) 00:12:44.906 6.447 - 6.495: 99.1579% ( 1) 00:12:44.906 6.542 - 6.590: 99.1665% ( 1) 00:12:44.906 6.874 - 6.921: 99.1750% ( 1) 00:12:44.906 7.064 - 7.111: 99.1835% ( 1) 00:12:44.906 7.301 - 7.348: 99.2005% ( 2) 00:12:44.906 7.443 - 7.490: 99.2090% ( 1) 00:12:44.906 7.680 - 7.727: 99.2175% ( 1) 00:12:44.906 8.201 - 8.249: 99.2260% ( 1) 00:12:44.906 8.770 - 8.818: 99.2430% ( 2) 00:12:44.906 8.913 - 8.960: 99.2515% ( 1) 00:12:44.906 9.339 - 9.387: 99.2685% ( 2) 00:12:44.906 9.576 - 9.624: 99.2855% ( 2) 00:12:44.906 9.719 - 9.766: 99.2940% ( 1) 00:12:44.906 9.908 - 9.956: 99.3025% ( 1) 00:12:44.906 10.240 - 10.287: 99.3110% ( 1) 00:12:44.906 10.430 - 10.477: 99.3196% ( 1) 00:12:44.906 10.667 - 10.714: 99.3281% ( 1) 00:12:44.906 12.089 - 12.136: 99.3366% ( 1) 00:12:44.906 13.748 - 13.843: 99.3451% ( 1) 00:12:44.906 15.644 - 15.739: 99.3536% ( 1) 00:12:44.906 15.739 - 15.834: 99.3621% ( 1) 00:12:44.906 15.834 - 15.929: 99.3706% ( 1) 00:12:44.906 17.351 - 17.446: 99.3791% ( 1) 00:12:44.906 26.169 - 26.359: 99.3876% ( 1) 00:12:44.906 78.886 - 79.265: 99.3961% ( 1) 00:12:44.906 2415.123 - 2427.259: 9[2024-07-25 10:19:34.359796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:44.906 9.4046% ( 1) 00:12:44.906 3980.705 - 4004.978: 99.7618% ( 42) 00:12:44.906 4004.978 - 4029.250: 99.9915% ( 27) 00:12:44.906 5000.154 - 5024.427: 100.0000% ( 1) 00:12:44.906 00:12:44.906 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:44.906 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:44.906 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:44.906 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:44.906 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:45.163 [ 00:12:45.163 { 00:12:45.163 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:45.164 "subtype": "Discovery", 00:12:45.164 "listen_addresses": [], 00:12:45.164 "allow_any_host": true, 00:12:45.164 "hosts": [] 00:12:45.164 }, 00:12:45.164 { 00:12:45.164 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:45.164 "subtype": "NVMe", 00:12:45.164 "listen_addresses": [ 00:12:45.164 { 00:12:45.164 "trtype": "VFIOUSER", 00:12:45.164 "adrfam": "IPv4", 00:12:45.164 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:45.164 "trsvcid": "0" 00:12:45.164 } 00:12:45.164 ], 00:12:45.164 "allow_any_host": true, 00:12:45.164 "hosts": [], 00:12:45.164 "serial_number": "SPDK1", 00:12:45.164 "model_number": "SPDK bdev Controller", 00:12:45.164 "max_namespaces": 32, 00:12:45.164 "min_cntlid": 1, 00:12:45.164 "max_cntlid": 65519, 00:12:45.164 "namespaces": [ 00:12:45.164 { 00:12:45.164 "nsid": 1, 00:12:45.164 "bdev_name": "Malloc1", 00:12:45.164 "name": "Malloc1", 00:12:45.164 "nguid": 
"D6CC3CDEFABA477FAA975D235D8EF32D", 00:12:45.164 "uuid": "d6cc3cde-faba-477f-aa97-5d235d8ef32d" 00:12:45.164 }, 00:12:45.164 { 00:12:45.164 "nsid": 2, 00:12:45.164 "bdev_name": "Malloc3", 00:12:45.164 "name": "Malloc3", 00:12:45.164 "nguid": "099CED6410FA4B679C689693B5A0090F", 00:12:45.164 "uuid": "099ced64-10fa-4b67-9c68-9693b5a0090f" 00:12:45.164 } 00:12:45.164 ] 00:12:45.164 }, 00:12:45.164 { 00:12:45.164 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:45.164 "subtype": "NVMe", 00:12:45.164 "listen_addresses": [ 00:12:45.164 { 00:12:45.164 "trtype": "VFIOUSER", 00:12:45.164 "adrfam": "IPv4", 00:12:45.164 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:45.164 "trsvcid": "0" 00:12:45.164 } 00:12:45.164 ], 00:12:45.164 "allow_any_host": true, 00:12:45.164 "hosts": [], 00:12:45.164 "serial_number": "SPDK2", 00:12:45.164 "model_number": "SPDK bdev Controller", 00:12:45.164 "max_namespaces": 32, 00:12:45.164 "min_cntlid": 1, 00:12:45.164 "max_cntlid": 65519, 00:12:45.164 "namespaces": [ 00:12:45.164 { 00:12:45.164 "nsid": 1, 00:12:45.164 "bdev_name": "Malloc2", 00:12:45.164 "name": "Malloc2", 00:12:45.164 "nguid": "E3969BDEE18B47328DBD71D87C2BAFA6", 00:12:45.164 "uuid": "e3969bde-e18b-4732-8dbd-71d87c2bafa6" 00:12:45.164 } 00:12:45.164 ] 00:12:45.164 } 00:12:45.164 ] 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1487053 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:45.164 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:45.164 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.164 [2024-07-25 10:19:34.890046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.421 Malloc4 00:12:45.421 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:45.679 [2024-07-25 10:19:35.341329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.679 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:45.679 Asynchronous Event Request test 00:12:45.679 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.679 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.679 Registering asynchronous event callbacks... 00:12:45.679 Starting namespace attribute notice tests for all controllers... 00:12:45.679 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:45.679 aer_cb - Changed Namespace 00:12:45.679 Cleaning up... 00:12:45.939 [ 00:12:45.939 { 00:12:45.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:45.939 "subtype": "Discovery", 00:12:45.939 "listen_addresses": [], 00:12:45.939 "allow_any_host": true, 00:12:45.939 "hosts": [] 00:12:45.939 }, 00:12:45.939 { 00:12:45.939 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:45.939 "subtype": "NVMe", 00:12:45.939 "listen_addresses": [ 00:12:45.939 { 00:12:45.939 "trtype": "VFIOUSER", 00:12:45.939 "adrfam": "IPv4", 00:12:45.939 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:45.939 "trsvcid": "0" 00:12:45.939 } 00:12:45.939 ], 00:12:45.939 "allow_any_host": true, 00:12:45.939 "hosts": [], 00:12:45.939 "serial_number": "SPDK1", 00:12:45.939 "model_number": "SPDK bdev Controller", 00:12:45.939 "max_namespaces": 32, 00:12:45.939 "min_cntlid": 1, 00:12:45.939 "max_cntlid": 65519, 00:12:45.939 "namespaces": [ 00:12:45.939 { 00:12:45.939 "nsid": 1, 00:12:45.939 "bdev_name": "Malloc1", 00:12:45.939 "name": "Malloc1", 00:12:45.939 "nguid": "D6CC3CDEFABA477FAA975D235D8EF32D", 00:12:45.939 "uuid": "d6cc3cde-faba-477f-aa97-5d235d8ef32d" 00:12:45.939 }, 00:12:45.939 { 00:12:45.939 "nsid": 2, 00:12:45.939 "bdev_name": "Malloc3", 00:12:45.939 "name": "Malloc3", 00:12:45.939 "nguid": "099CED6410FA4B679C689693B5A0090F", 00:12:45.939 "uuid": "099ced64-10fa-4b67-9c68-9693b5a0090f" 00:12:45.939 } 00:12:45.939 ] 00:12:45.939 }, 00:12:45.939 { 00:12:45.939 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:45.939 "subtype": "NVMe", 00:12:45.939 "listen_addresses": [ 00:12:45.939 { 00:12:45.939 "trtype": "VFIOUSER", 00:12:45.939 "adrfam": "IPv4", 00:12:45.939 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:45.939 "trsvcid": "0" 00:12:45.939 } 00:12:45.939 ], 00:12:45.939 "allow_any_host": true, 00:12:45.939 "hosts": [], 00:12:45.939 
"serial_number": "SPDK2", 00:12:45.939 "model_number": "SPDK bdev Controller", 00:12:45.939 "max_namespaces": 32, 00:12:45.939 "min_cntlid": 1, 00:12:45.939 "max_cntlid": 65519, 00:12:45.939 "namespaces": [ 00:12:45.939 { 00:12:45.939 "nsid": 1, 00:12:45.939 "bdev_name": "Malloc2", 00:12:45.939 "name": "Malloc2", 00:12:45.939 "nguid": "E3969BDEE18B47328DBD71D87C2BAFA6", 00:12:45.939 "uuid": "e3969bde-e18b-4732-8dbd-71d87c2bafa6" 00:12:45.939 }, 00:12:45.939 { 00:12:45.939 "nsid": 2, 00:12:45.939 "bdev_name": "Malloc4", 00:12:45.939 "name": "Malloc4", 00:12:45.939 "nguid": "E5C484C204FB4D24BC9BFB70282E64F9", 00:12:45.939 "uuid": "e5c484c2-04fb-4d24-bc9b-fb70282e64f9" 00:12:45.939 } 00:12:45.939 ] 00:12:45.939 } 00:12:45.939 ] 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1487053 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1482705 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1482705 ']' 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1482705 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1482705 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1482705' 00:12:45.939 killing process with pid 1482705 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1482705 00:12:45.939 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1482705 00:12:46.199 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:46.199 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:46.199 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:46.200 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:46.200 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:46.200 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1487169 00:12:46.200 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1487169' 00:12:46.200 Process pid: 1487169 00:12:46.200 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:46.200 10:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:46.200 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1487169 00:12:46.200 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1487169 ']' 00:12:46.200 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.459 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.459 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.459 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.459 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:46.459 [2024-07-25 10:19:36.024653] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:46.459 [2024-07-25 10:19:36.025876] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:12:46.459 [2024-07-25 10:19:36.025943] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.459 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.459 [2024-07-25 10:19:36.087310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.459 [2024-07-25 10:19:36.204601] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.459 [2024-07-25 10:19:36.204656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.459 [2024-07-25 10:19:36.204673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.459 [2024-07-25 10:19:36.204687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.459 [2024-07-25 10:19:36.204699] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.459 [2024-07-25 10:19:36.204782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.459 [2024-07-25 10:19:36.204866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.459 [2024-07-25 10:19:36.204921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.459 [2024-07-25 10:19:36.204924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.717 [2024-07-25 10:19:36.290764] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:46.717 [2024-07-25 10:19:36.290986] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:46.717 [2024-07-25 10:19:36.291243] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:12:46.717 [2024-07-25 10:19:36.291746] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:46.717 [2024-07-25 10:19:36.292006] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:46.717 10:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.717 10:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:12:46.717 10:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:47.655 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:47.913 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:47.913 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:47.913 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:47.913 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:47.913 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:48.173 Malloc1 00:12:48.173 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:48.742 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:49.001 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:49.259 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:49.260 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:49.260 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:49.519 Malloc2 00:12:49.519 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:49.777 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:50.035 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:12:50.295 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:50.295 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1487169 00:12:50.295 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1487169 ']' 00:12:50.295 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1487169 00:12:50.295 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:50.295 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.295 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1487169 00:12:50.295 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.295 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.296 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1487169' 00:12:50.296 killing process with pid 1487169 00:12:50.296 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1487169 00:12:50.296 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1487169 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:50.556 00:12:50.556 real 0m53.417s 00:12:50.556 user 3m31.074s 00:12:50.556 sys 0m4.235s 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:50.556 ************************************ 00:12:50.556 END TEST nvmf_vfio_user 00:12:50.556 ************************************ 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.556 ************************************ 00:12:50.556 START TEST nvmf_vfio_user_nvme_compliance 00:12:50.556 ************************************ 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:50.556 * Looking for test storage... 
00:12:50.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1487642 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1487642' 00:12:50.556 Process pid: 1487642 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1487642 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1487642 ']' 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.556 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:50.816 [2024-07-25 10:19:40.362328] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
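Once the compliance target's reactors are up, the script drives the whole bring-up over the default RPC socket, as the xtrace further down shows: a plain VFIOUSER transport, one malloc bdev, and a single subsystem that the compliance binary then exercises through the vfio-user endpoint. A rough standalone equivalent, using rpc.py in place of the script's rpc_cmd wrapper (reading -a as allow-any-host, -s as the serial number, and -m as a namespace cap; the log itself never expands those flags):

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0

  # then aim the compliance suite at that endpoint, as compliance.sh@40 does
  test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'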
00:12:50.816 [2024-07-25 10:19:40.362427] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.816 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.816 [2024-07-25 10:19:40.425398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.816 [2024-07-25 10:19:40.541635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.816 [2024-07-25 10:19:40.541698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.816 [2024-07-25 10:19:40.541715] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.816 [2024-07-25 10:19:40.541729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.816 [2024-07-25 10:19:40.541742] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.816 [2024-07-25 10:19:40.541832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.816 [2024-07-25 10:19:40.541884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.816 [2024-07-25 10:19:40.541888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.075 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.075 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:12:51.075 10:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:52.015 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 malloc0 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.016 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:52.016 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.276 00:12:52.276 00:12:52.276 CUnit - A unit testing framework for C - Version 2.1-3 00:12:52.276 http://cunit.sourceforge.net/ 00:12:52.276 00:12:52.276 00:12:52.276 Suite: nvme_compliance 00:12:52.276 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 10:19:41.882034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.276 [2024-07-25 10:19:41.883528] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:52.276 [2024-07-25 10:19:41.883560] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:52.276 [2024-07-25 10:19:41.883575] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:52.276 [2024-07-25 10:19:41.885048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.276 passed 00:12:52.276 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 10:19:41.993795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.276 [2024-07-25 10:19:41.997817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.276 passed 00:12:52.537 Test: admin_identify_ns ...[2024-07-25 10:19:42.108516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.537 [2024-07-25 10:19:42.172514] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:52.537 [2024-07-25 10:19:42.180510] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:52.537 [2024-07-25 
10:19:42.201650] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.537 passed 00:12:52.537 Test: admin_get_features_mandatory_features ...[2024-07-25 10:19:42.301222] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.537 [2024-07-25 10:19:42.304247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.798 passed 00:12:52.798 Test: admin_get_features_optional_features ...[2024-07-25 10:19:42.409944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.798 [2024-07-25 10:19:42.412968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.798 passed 00:12:52.798 Test: admin_set_features_number_of_queues ...[2024-07-25 10:19:42.518144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.058 [2024-07-25 10:19:42.622642] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.058 passed 00:12:53.058 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 10:19:42.725725] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.058 [2024-07-25 10:19:42.729760] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.058 passed 00:12:53.058 Test: admin_get_log_page_with_lpo ...[2024-07-25 10:19:42.827458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.317 [2024-07-25 10:19:42.897494] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:53.317 [2024-07-25 10:19:42.910605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.317 passed 00:12:53.317 Test: fabric_property_get ...[2024-07-25 10:19:43.013951] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.317 [2024-07-25 10:19:43.015311] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:53.317 [2024-07-25 10:19:43.016973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.317 passed 00:12:53.577 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 10:19:43.119645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.577 [2024-07-25 10:19:43.120983] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:53.577 [2024-07-25 10:19:43.123667] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.577 passed 00:12:53.577 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 10:19:43.228474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.577 [2024-07-25 10:19:43.315499] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:53.577 [2024-07-25 10:19:43.331513] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:53.577 [2024-07-25 10:19:43.336634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.836 passed 00:12:53.836 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 10:19:43.437202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.836 [2024-07-25 10:19:43.438558] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:12:53.836 [2024-07-25 10:19:43.440233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.836 passed 00:12:53.836 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 10:19:43.546392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.095 [2024-07-25 10:19:43.619506] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:54.095 [2024-07-25 10:19:43.643504] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:54.095 [2024-07-25 10:19:43.648639] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.095 passed 00:12:54.095 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 10:19:43.750111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.095 [2024-07-25 10:19:43.753822] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:54.095 [2024-07-25 10:19:43.753867] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:54.095 [2024-07-25 10:19:43.755168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.095 passed 00:12:54.095 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 10:19:43.854276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.356 [2024-07-25 10:19:43.945507] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:54.356 [2024-07-25 10:19:43.953489] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:54.356 [2024-07-25 10:19:43.961488] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:54.356 [2024-07-25 10:19:43.969512] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:54.356 [2024-07-25 10:19:43.998631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.356 passed 00:12:54.356 Test: admin_create_io_sq_verify_pc ...[2024-07-25 10:19:44.103130] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.356 [2024-07-25 10:19:44.119522] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:54.615 [2024-07-25 10:19:44.136669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.615 passed 00:12:54.615 Test: admin_create_io_qp_max_qps ...[2024-07-25 10:19:44.237325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.996 [2024-07-25 10:19:45.335520] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:55.996 [2024-07-25 10:19:45.733718] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.256 passed 00:12:56.256 Test: admin_create_io_sq_shared_cq ...[2024-07-25 10:19:45.833951] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.256 [2024-07-25 10:19:45.966530] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:56.256 [2024-07-25 10:19:46.002598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.518 passed 00:12:56.518 00:12:56.518 Run Summary: Type Total Ran Passed Failed Inactive 00:12:56.518 
suites 1 1 n/a 0 0 00:12:56.518 tests 18 18 18 0 0 00:12:56.518 asserts 360 360 360 0 n/a 00:12:56.518 00:12:56.518 Elapsed time = 1.746 seconds 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1487642 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1487642 ']' 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1487642 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1487642 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1487642' 00:12:56.518 killing process with pid 1487642 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1487642 00:12:56.518 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1487642 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:56.779 00:12:56.779 real 0m6.090s 00:12:56.779 user 0m17.124s 00:12:56.779 sys 0m0.514s 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.779 ************************************ 00:12:56.779 END TEST nvmf_vfio_user_nvme_compliance 00:12:56.779 ************************************ 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.779 ************************************ 00:12:56.779 START TEST nvmf_vfio_user_fuzz 00:12:56.779 ************************************ 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:56.779 * Looking for test storage... 
00:12:56.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:56.779 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1488291 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1488291' 00:12:56.780 Process pid: 1488291 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1488291 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1488291 ']' 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
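Each suite blocks in waitforlisten until the freshly started target answers on /var/tmp/spdk.sock (max_retries=100 in the trace above). The real helper lives in common/autotest_common.sh; the loop below is only a simplified sketch of that pattern, with rpc.py's rpc_get_methods assumed as the liveness probe:

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local i max_retries=100
      for ((i = 0; i < max_retries; i++)); do
          # give up early if the target died during startup
          kill -0 "$pid" 2>/dev/null || return 1
          # done as soon as an RPC round-trip succeeds on the socket
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }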
00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.780 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:57.043 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:57.043 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:12:57.043 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:58.457 malloc0 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
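With the connection string stored in $trid, the script hands the vfio-user endpoint to the NVMe fuzzer for a timed run; the invocation that follows is reproduced here with the path shortened to the checkout root. Judging from the 30-second wall time and the seeds echoed in the summary, -t looks like the run time in seconds and -S the seed, but the log never expands the flags, and -N and -a are passed through verbatim:

  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
      -N -a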
00:12:58.457 10:19:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:30.531 Fuzzing completed. Shutting down the fuzz application 00:13:30.531 00:13:30.531 Dumping successful admin opcodes: 00:13:30.531 8, 9, 10, 24, 00:13:30.531 Dumping successful io opcodes: 00:13:30.531 0, 00:13:30.531 NS: 0x200003a1ef00 I/O qp, Total commands completed: 585641, total successful commands: 2256, random_seed: 1566098048 00:13:30.531 NS: 0x200003a1ef00 admin qp, Total commands completed: 84339, total successful commands: 671, random_seed: 853010048 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1488291 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1488291 ']' 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1488291 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1488291 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1488291' 00:13:30.531 killing process with pid 1488291 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1488291 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1488291 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:30.531 00:13:30.531 real 0m32.283s 00:13:30.531 user 0m32.206s 00:13:30.531 sys 0m28.074s 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:30.531 
************************************ 00:13:30.531 END TEST nvmf_vfio_user_fuzz 00:13:30.531 ************************************ 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.531 ************************************ 00:13:30.531 START TEST nvmf_auth_target 00:13:30.531 ************************************ 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:30.531 * Looking for test storage... 00:13:30.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.531 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.532 10:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.532 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.791 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.792 10:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:30.792 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:30.792 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:30.792 Found net devices under 0000:08:00.0: cvl_0_0 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.792 10:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:30.792 Found net devices under 0000:08:00.1: cvl_0_1 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.792 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.053 10:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:13:31.053 00:13:31.053 --- 10.0.0.2 ping statistics --- 00:13:31.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.053 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:13:31.053 00:13:31.053 --- 10.0.0.1 ping statistics --- 00:13:31.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.053 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1492464 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1492464 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1492464 ']' 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.053 10:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.053 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.312 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.312 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:31.312 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.312 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:31.312 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1492553 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=53883e371c0269f6fae4aaf5cc90bda3ae0b0eceba865a1a 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uL5 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 53883e371c0269f6fae4aaf5cc90bda3ae0b0eceba865a1a 0 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 53883e371c0269f6fae4aaf5cc90bda3ae0b0eceba865a1a 0 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=53883e371c0269f6fae4aaf5cc90bda3ae0b0eceba865a1a 00:13:31.312 10:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uL5 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uL5 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.uL5 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0aeed748c7b72807ae0e2dd87e4213b488cc769f909bb16fcaa5925ea02a85c5 00:13:31.312 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ur4 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0aeed748c7b72807ae0e2dd87e4213b488cc769f909bb16fcaa5925ea02a85c5 3 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0aeed748c7b72807ae0e2dd87e4213b488cc769f909bb16fcaa5925ea02a85c5 3 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0aeed748c7b72807ae0e2dd87e4213b488cc769f909bb16fcaa5925ea02a85c5 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ur4 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ur4 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ur4 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.571 10:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e5456e470db2a2fe8addec67b56351d2 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IwW 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e5456e470db2a2fe8addec67b56351d2 1 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e5456e470db2a2fe8addec67b56351d2 1 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e5456e470db2a2fe8addec67b56351d2 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IwW 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IwW 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.IwW 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dd012aa69fe1199903d27757295a04dc99f4b2560290fd7f 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gzK 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dd012aa69fe1199903d27757295a04dc99f4b2560290fd7f 2 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
dd012aa69fe1199903d27757295a04dc99f4b2560290fd7f 2 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dd012aa69fe1199903d27757295a04dc99f4b2560290fd7f 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gzK 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gzK 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.gzK 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6ae06af7a834d04148ce052883f3908f464fbf6430db414f 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Gi1 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6ae06af7a834d04148ce052883f3908f464fbf6430db414f 2 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6ae06af7a834d04148ce052883f3908f464fbf6430db414f 2 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.571 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6ae06af7a834d04148ce052883f3908f464fbf6430db414f 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Gi1 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Gi1 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Gi1 00:13:31.572 10:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e1f8115f20bc177cba4c8e2e4224d524 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xpK 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e1f8115f20bc177cba4c8e2e4224d524 1 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e1f8115f20bc177cba4c8e2e4224d524 1 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e1f8115f20bc177cba4c8e2e4224d524 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:31.572 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xpK 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xpK 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.xpK 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3c76a20d8533398af2a82cf5654a5daeaf70c597a28c6f8b4aee4134b670cd75 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:31.830 
10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CO0 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3c76a20d8533398af2a82cf5654a5daeaf70c597a28c6f8b4aee4134b670cd75 3 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3c76a20d8533398af2a82cf5654a5daeaf70c597a28c6f8b4aee4134b670cd75 3 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3c76a20d8533398af2a82cf5654a5daeaf70c597a28c6f8b4aee4134b670cd75 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CO0 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CO0 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.CO0 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1492464 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1492464 ']' 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.830 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.089 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.089 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:32.089 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1492553 /var/tmp/host.sock 00:13:32.089 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1492553 ']' 00:13:32.089 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:32.089 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.089 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
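(Aside: each gen_dhchap_key call above draws random hex from /dev/urandom via xxd and hands it to a short "python -" step for framing. A sketch of what that step plausibly does, inferred from the secrets that appear verbatim in the later nvme connect calls: the hex string is kept as ASCII bytes, suffixed with its little-endian CRC-32, base64-encoded, and wrapped as DHHC-1:<hash>:<b64>:, where the two-digit hash field follows the digests map above (00=null, 01=sha256, 02=sha384, 03=sha512). Treat this as an illustration, not the script itself:

python3 - <<'EOF'
import base64, struct, zlib
key = b"53883e371c0269f6fae4aaf5cc90bda3ae0b0eceba865a1a"  # key0's hex from the trace, used as ASCII
digest = 0                                                  # null transform for key0
blob = key + struct.pack("<I", zlib.crc32(key))             # append CRC-32 of the key, little-endian
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(blob).decode()))
EOF

If the framing assumption holds, the output should match the --dhchap-secret for key0 seen in the first nvme connect call below, DHHC-1:00:NTM4ODNl...UtLRyg==:.)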
00:13:32.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:32.089 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.089 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uL5 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uL5 00:13:32.347 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uL5 00:13:32.606 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ur4 ]] 00:13:32.606 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ur4 00:13:32.606 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.606 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.606 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.606 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ur4 00:13:32.606 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ur4 00:13:32.863 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:32.863 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IwW 00:13:32.863 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.863 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.863 10:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.863 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.IwW 00:13:32.863 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.IwW 00:13:33.121 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.gzK ]] 00:13:33.121 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gzK 00:13:33.121 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.121 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.121 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.121 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gzK 00:13:33.121 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gzK 00:13:33.379 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:33.379 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Gi1 00:13:33.379 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.379 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.637 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.637 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Gi1 00:13:33.637 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Gi1 00:13:33.895 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.xpK ]] 00:13:33.895 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xpK 00:13:33.895 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.895 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.895 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.895 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xpK 00:13:33.895 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xpK 00:13:34.154 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:34.154 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CO0 00:13:34.154 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.154 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.154 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.154 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.CO0 00:13:34.154 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.CO0 00:13:34.412 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:34.412 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:34.412 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:34.412 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:34.412 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:34.412 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.670 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.237 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:35.237 { 00:13:35.237 "cntlid": 1, 00:13:35.237 "qid": 0, 00:13:35.237 "state": "enabled", 00:13:35.237 "thread": "nvmf_tgt_poll_group_000", 00:13:35.237 "listen_address": { 00:13:35.237 "trtype": "TCP", 00:13:35.237 "adrfam": "IPv4", 00:13:35.237 "traddr": "10.0.0.2", 00:13:35.237 "trsvcid": "4420" 00:13:35.237 }, 00:13:35.237 "peer_address": { 00:13:35.237 "trtype": "TCP", 00:13:35.237 "adrfam": "IPv4", 00:13:35.237 "traddr": "10.0.0.1", 00:13:35.237 "trsvcid": "42396" 00:13:35.237 }, 00:13:35.237 "auth": { 00:13:35.237 "state": "completed", 00:13:35.237 "digest": "sha256", 00:13:35.237 "dhgroup": "null" 00:13:35.237 } 00:13:35.237 } 00:13:35.237 ]' 00:13:35.237 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:35.494 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.494 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:35.494 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:35.494 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:35.494 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.494 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.494 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.751 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:13:36.688 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.947 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:36.947 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.947 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.947 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.947 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.947 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:36.947 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.206 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:37.464 00:13:37.464 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.464 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.464 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.723 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.723 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.723 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.723 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.723 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.723 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:37.723 { 00:13:37.723 "cntlid": 3, 00:13:37.723 "qid": 0, 00:13:37.723 "state": "enabled", 00:13:37.723 "thread": "nvmf_tgt_poll_group_000", 00:13:37.723 "listen_address": { 00:13:37.723 "trtype": "TCP", 00:13:37.723 "adrfam": "IPv4", 00:13:37.723 "traddr": "10.0.0.2", 00:13:37.723 "trsvcid": "4420" 00:13:37.723 }, 00:13:37.723 "peer_address": { 00:13:37.723 "trtype": "TCP", 00:13:37.723 "adrfam": "IPv4", 00:13:37.723 "traddr": "10.0.0.1", 00:13:37.723 "trsvcid": "42414" 00:13:37.723 }, 00:13:37.723 "auth": { 00:13:37.723 "state": "completed", 00:13:37.723 "digest": "sha256", 00:13:37.723 "dhgroup": "null" 00:13:37.723 } 00:13:37.723 } 00:13:37.723 ]' 00:13:37.723 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.981 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.981 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.981 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:37.981 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.981 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.981 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.981 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.240 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:13:39.618 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.618 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:13:39.618 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:39.618 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.618 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.618 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.618 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:39.618 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:39.618 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.877 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.135 00:13:40.135 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.135 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:40.135 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.394 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.394 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.394 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.394 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.394 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.394 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:40.394 { 00:13:40.394 "cntlid": 5, 00:13:40.394 "qid": 0, 00:13:40.394 "state": "enabled", 00:13:40.394 "thread": "nvmf_tgt_poll_group_000", 00:13:40.394 "listen_address": { 00:13:40.394 "trtype": "TCP", 00:13:40.394 "adrfam": "IPv4", 00:13:40.394 "traddr": "10.0.0.2", 00:13:40.394 "trsvcid": "4420" 00:13:40.394 }, 00:13:40.394 "peer_address": { 00:13:40.394 "trtype": "TCP", 00:13:40.394 "adrfam": "IPv4", 00:13:40.394 "traddr": "10.0.0.1", 00:13:40.394 "trsvcid": "42442" 00:13:40.394 }, 00:13:40.394 "auth": { 00:13:40.394 "state": "completed", 00:13:40.394 "digest": "sha256", 00:13:40.394 "dhgroup": "null" 00:13:40.394 } 00:13:40.394 } 00:13:40.394 ]' 00:13:40.394 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:40.653 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:40.653 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:40.653 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:40.653 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:40.653 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.653 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.653 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.911 10:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:13:42.285 10:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.285 10:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:42.285 10:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:42.285 10:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.285 10:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.285 10:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.285 10:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:42.285 10:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:42.285 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:42.851 00:13:42.851 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:42.851 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:42.851 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.110 { 00:13:43.110 "cntlid": 7, 00:13:43.110 "qid": 0, 00:13:43.110 "state": "enabled", 00:13:43.110 "thread": "nvmf_tgt_poll_group_000", 00:13:43.110 "listen_address": { 00:13:43.110 "trtype": "TCP", 00:13:43.110 "adrfam": "IPv4", 00:13:43.110 "traddr": "10.0.0.2", 00:13:43.110 "trsvcid": "4420" 00:13:43.110 }, 00:13:43.110 "peer_address": { 00:13:43.110 "trtype": "TCP", 00:13:43.110 "adrfam": "IPv4", 00:13:43.110 "traddr": "10.0.0.1", 00:13:43.110 "trsvcid": "42472" 00:13:43.110 }, 00:13:43.110 "auth": { 00:13:43.110 "state": "completed", 00:13:43.110 "digest": "sha256", 00:13:43.110 "dhgroup": "null" 00:13:43.110 } 00:13:43.110 } 00:13:43.110 ]' 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.110 10:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.368 10:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.933 10:20:34 
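Each authenticated attach in this trace follows the same shape, driven entirely over JSON-RPC: pin the host-side initiator to one digest/dhgroup combination, register the key with the subsystem on the target, attach a controller (which is what triggers the DH-HMAC-CHAP handshake), verify, and detach. A minimal sketch of one such pass, assuming an SPDK target on 10.0.0.2:4420 serving RPCs on its default socket, a host-side SPDK app on /var/tmp/host.sock, and keys key1/ckey1 already registered with both applications (the key setup happens before this part of the trace):

#!/usr/bin/env bash
# One connect_authenticate pass, distilled from the trace above.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc

# Host initiator: allow exactly one digest/dhgroup combination for this pass.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha256 --dhchap-dhgroups null

# Target: allow this host NQN, naming the host key and (for bidirectional
# authentication) the controller key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host: attach a controller; DH-HMAC-CHAP runs during this connect.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
  -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The controller only shows up if authentication completed.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Tear down before the next key/dhgroup combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0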
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.933 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.498 00:13:45.498 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.498 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.498 10:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.498 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.498 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.498 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.498 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.498 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.498 10:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.498 { 00:13:45.498 "cntlid": 9, 00:13:45.498 "qid": 0, 00:13:45.498 "state": "enabled", 00:13:45.498 "thread": "nvmf_tgt_poll_group_000", 00:13:45.498 "listen_address": { 00:13:45.498 "trtype": "TCP", 00:13:45.498 "adrfam": "IPv4", 00:13:45.498 "traddr": "10.0.0.2", 00:13:45.498 "trsvcid": "4420" 00:13:45.498 }, 00:13:45.498 "peer_address": { 00:13:45.498 "trtype": "TCP", 00:13:45.498 "adrfam": "IPv4", 00:13:45.498 "traddr": "10.0.0.1", 00:13:45.498 "trsvcid": "51170" 00:13:45.498 }, 00:13:45.498 "auth": { 00:13:45.498 "state": "completed", 00:13:45.498 "digest": "sha256", 00:13:45.498 "dhgroup": "ffdhe2048" 00:13:45.498 } 00:13:45.498 } 00:13:45.498 ]' 00:13:45.498 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.756 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.756 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.756 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:45.756 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.756 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.756 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.756 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.013 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:13:47.385 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.385 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:47.385 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.385 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.385 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.385 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:47.385 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.385 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.951 00:13:47.951 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.951 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.951 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.209 { 00:13:48.209 "cntlid": 11, 00:13:48.209 "qid": 0, 00:13:48.209 "state": "enabled", 00:13:48.209 "thread": "nvmf_tgt_poll_group_000", 00:13:48.209 "listen_address": { 
00:13:48.209 "trtype": "TCP", 00:13:48.209 "adrfam": "IPv4", 00:13:48.209 "traddr": "10.0.0.2", 00:13:48.209 "trsvcid": "4420" 00:13:48.209 }, 00:13:48.209 "peer_address": { 00:13:48.209 "trtype": "TCP", 00:13:48.209 "adrfam": "IPv4", 00:13:48.209 "traddr": "10.0.0.1", 00:13:48.209 "trsvcid": "51194" 00:13:48.209 }, 00:13:48.209 "auth": { 00:13:48.209 "state": "completed", 00:13:48.209 "digest": "sha256", 00:13:48.209 "dhgroup": "ffdhe2048" 00:13:48.209 } 00:13:48.209 } 00:13:48.209 ]' 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.209 10:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.468 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:13:49.844 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.844 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:49.844 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.844 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.844 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.844 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.844 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:49.844 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.135 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.394 00:13:50.394 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.394 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.394 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.652 { 00:13:50.652 "cntlid": 13, 00:13:50.652 "qid": 0, 00:13:50.652 "state": "enabled", 00:13:50.652 "thread": "nvmf_tgt_poll_group_000", 00:13:50.652 "listen_address": { 00:13:50.652 "trtype": "TCP", 00:13:50.652 "adrfam": "IPv4", 00:13:50.652 "traddr": "10.0.0.2", 00:13:50.652 "trsvcid": "4420" 00:13:50.652 }, 00:13:50.652 "peer_address": { 00:13:50.652 "trtype": "TCP", 00:13:50.652 "adrfam": "IPv4", 00:13:50.652 "traddr": "10.0.0.1", 00:13:50.652 "trsvcid": "51226" 00:13:50.652 }, 00:13:50.652 "auth": { 00:13:50.652 
"state": "completed", 00:13:50.652 "digest": "sha256", 00:13:50.652 "dhgroup": "ffdhe2048" 00:13:50.652 } 00:13:50.652 } 00:13:50.652 ]' 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:50.652 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.911 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.911 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.911 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.169 10:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:13:52.550 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.550 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:52.550 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.550 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.550 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.550 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.550 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:52.550 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.550 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.810 00:13:53.069 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.069 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.069 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.328 { 00:13:53.328 "cntlid": 15, 00:13:53.328 "qid": 0, 00:13:53.328 "state": "enabled", 00:13:53.328 "thread": "nvmf_tgt_poll_group_000", 00:13:53.328 "listen_address": { 00:13:53.328 "trtype": "TCP", 00:13:53.328 "adrfam": "IPv4", 00:13:53.328 "traddr": "10.0.0.2", 00:13:53.328 "trsvcid": "4420" 00:13:53.328 }, 00:13:53.328 "peer_address": { 00:13:53.328 "trtype": "TCP", 00:13:53.328 "adrfam": "IPv4", 00:13:53.328 "traddr": "10.0.0.1", 00:13:53.328 "trsvcid": "51272" 00:13:53.328 }, 00:13:53.328 "auth": { 00:13:53.328 "state": "completed", 00:13:53.328 "digest": "sha256", 00:13:53.328 "dhgroup": "ffdhe2048" 00:13:53.328 } 00:13:53.328 } 00:13:53.328 ]' 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.328 10:20:42 
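The assertions in between are driven off the qpair JSON that nvmf_subsystem_get_qpairs returns: its auth object reports the digest and dhgroup that were actually negotiated, plus the authentication state. Condensed, the check the trace keeps repeating looks like this (reusing the $rpc path from the first sketch; ffdhe2048 stands in for whichever dhgroup the pass configured):

qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# Every field must match what bdev_nvme_set_options pinned for this pass.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]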
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:53.328 10:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.328 10:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.328 10:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.328 10:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.588 10:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:13:54.972 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.972 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:54.972 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.972 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.972 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.972 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.972 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.972 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:54.972 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.231 10:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.488 00:13:55.488 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.489 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.489 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.747 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.747 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.747 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.747 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.747 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.747 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.747 { 00:13:55.747 "cntlid": 17, 00:13:55.747 "qid": 0, 00:13:55.747 "state": "enabled", 00:13:55.747 "thread": "nvmf_tgt_poll_group_000", 00:13:55.747 "listen_address": { 00:13:55.747 "trtype": "TCP", 00:13:55.747 "adrfam": "IPv4", 00:13:55.747 "traddr": "10.0.0.2", 00:13:55.747 "trsvcid": "4420" 00:13:55.747 }, 00:13:55.747 "peer_address": { 00:13:55.747 "trtype": "TCP", 00:13:55.747 "adrfam": "IPv4", 00:13:55.747 "traddr": "10.0.0.1", 00:13:55.747 "trsvcid": "36236" 00:13:55.747 }, 00:13:55.747 "auth": { 00:13:55.747 "state": "completed", 00:13:55.747 "digest": "sha256", 00:13:55.747 "dhgroup": "ffdhe3072" 00:13:55.747 } 00:13:55.747 } 00:13:55.747 ]' 00:13:55.747 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.747 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.747 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.005 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:56.005 10:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.005 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.005 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.005 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.263 10:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.640 10:20:47 
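Each pass also exercises the kernel initiator: after detaching the SPDK host controller, the test reconnects with nvme-cli, passing the DH-HMAC-CHAP secrets inline. The secrets use the DHHC-1:<t>:<base64>: key format, where per the NVMe-oF spec the second field names the transform applied to the secret (00 unhashed, 01/02/03 SHA-256/384/512-derived). A sketch with hypothetical placeholder secrets, reusing $rpc from the first sketch; the real values are generated earlier in the test:

host_uuid=a27f578f-8275-e111-bd1d-001e673e77fc
subnqn=nqn.2024-03.io.spdk:cnode0

# Placeholder secrets -- substitute real DHHC-1 keys here.
key='DHHC-1:00:<base64-host-secret>:'
ckey='DHHC-1:03:<base64-controller-secret>:'

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:${host_uuid}" --hostid "$host_uuid" \
  --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

# Disconnect and revoke the host entry before the next pass.
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" \
  "nqn.2014-08.org.nvmexpress:uuid:${host_uuid}"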
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.640 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.207 00:13:58.207 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.207 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.207 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.465 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.465 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.465 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.465 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.465 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.465 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.465 { 00:13:58.465 "cntlid": 19, 00:13:58.465 "qid": 0, 00:13:58.465 "state": "enabled", 00:13:58.465 "thread": "nvmf_tgt_poll_group_000", 00:13:58.465 "listen_address": { 00:13:58.465 "trtype": "TCP", 00:13:58.465 "adrfam": "IPv4", 00:13:58.465 "traddr": "10.0.0.2", 00:13:58.465 "trsvcid": "4420" 00:13:58.465 }, 00:13:58.465 "peer_address": { 00:13:58.465 "trtype": "TCP", 00:13:58.466 "adrfam": "IPv4", 00:13:58.466 "traddr": "10.0.0.1", 00:13:58.466 "trsvcid": "36254" 00:13:58.466 }, 00:13:58.466 "auth": { 00:13:58.466 "state": "completed", 00:13:58.466 "digest": "sha256", 00:13:58.466 "dhgroup": "ffdhe3072" 00:13:58.466 } 00:13:58.466 } 00:13:58.466 ]' 00:13:58.466 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.466 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.466 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.466 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:58.466 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.466 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.466 10:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:58.466 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:58.724 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==:
00:14:00.106 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:00.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:00.106 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:00.106 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.106 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:00.106 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.106 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:00.106 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:00.106 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:00.365 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:00.624
00:14:00.624 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:00.624 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:00.624 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:00.882 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:00.882 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:00.882 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.882 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:00.882 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.882 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:00.882 {
00:14:00.882 "cntlid": 21,
00:14:00.882 "qid": 0,
00:14:00.882 "state": "enabled",
00:14:00.882 "thread": "nvmf_tgt_poll_group_000",
00:14:00.882 "listen_address": {
00:14:00.882 "trtype": "TCP",
00:14:00.882 "adrfam": "IPv4",
00:14:00.882 "traddr": "10.0.0.2",
00:14:00.882 "trsvcid": "4420"
00:14:00.882 },
00:14:00.882 "peer_address": {
00:14:00.882 "trtype": "TCP",
00:14:00.882 "adrfam": "IPv4",
00:14:00.882 "traddr": "10.0.0.1",
00:14:00.882 "trsvcid": "36286"
00:14:00.882 },
00:14:00.882 "auth": {
00:14:00.882 "state": "completed",
00:14:00.882 "digest": "sha256",
00:14:00.882 "dhgroup": "ffdhe3072"
00:14:00.882 }
00:14:00.882 }
00:14:00.882 ]'
00:14:00.882 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:01.142 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:01.142 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:01.142 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:01.142 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:01.142 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:01.143 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:01.143 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:01.402 10:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie:
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:02.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:02.782 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
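The block above is one complete iteration of the test's per-key cycle. As a reading aid, here is the target-side half of that cycle reconstructed as a plain script (a sketch only: SUBNQN/HOSTNQN/RPC are illustrative shorthand for the literal values in the trace, and the named keys are assumed to have been loaded into the target earlier in the run, outside this excerpt):

    # Sketch: target-side provisioning for one DH-HMAC-CHAP key pair.
    SUBNQN='nqn.2024-03.io.spdk:cnode0'
    HOSTNQN='nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc'
    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'

    # Authorize the host NQN and bind it to the DH-HMAC-CHAP keys; the
    # controller key (ckey2) makes the authentication bidirectional.
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # ... host connects and the auth state is verified (see later sketches) ...

    # Revoke the host again before the next key is exercised.
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"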
00:14:03.349
00:14:03.349 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:03.349 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:03.349 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:03.607 {
00:14:03.607 "cntlid": 23,
00:14:03.607 "qid": 0,
00:14:03.607 "state": "enabled",
00:14:03.607 "thread": "nvmf_tgt_poll_group_000",
00:14:03.607 "listen_address": {
00:14:03.607 "trtype": "TCP",
00:14:03.607 "adrfam": "IPv4",
00:14:03.607 "traddr": "10.0.0.2",
00:14:03.607 "trsvcid": "4420"
00:14:03.607 },
00:14:03.607 "peer_address": {
00:14:03.607 "trtype": "TCP",
00:14:03.607 "adrfam": "IPv4",
00:14:03.607 "traddr": "10.0.0.1",
00:14:03.607 "trsvcid": "36324"
00:14:03.607 },
00:14:03.607 "auth": {
00:14:03.607 "state": "completed",
00:14:03.607 "digest": "sha256",
00:14:03.607 "dhgroup": "ffdhe3072"
00:14:03.607 }
00:14:03.607 }
00:14:03.607 ]'
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:03.607 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:04.177 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=:
00:14:05.170 10:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:05.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:05.170 10:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:05.170 10:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.170 10:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.170 10:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.170 10:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:05.170 10:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:05.170 10:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:05.170 10:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:05.430 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:06.000
00:14:06.000 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:06.000 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:06.000 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:06.259 {
00:14:06.259 "cntlid": 25,
00:14:06.259 "qid": 0,
00:14:06.259 "state": "enabled",
00:14:06.259 "thread": "nvmf_tgt_poll_group_000",
00:14:06.259 "listen_address": {
00:14:06.259 "trtype": "TCP",
00:14:06.259 "adrfam": "IPv4",
00:14:06.259 "traddr": "10.0.0.2",
00:14:06.259 "trsvcid": "4420"
00:14:06.259 },
00:14:06.259 "peer_address": {
00:14:06.259 "trtype": "TCP",
00:14:06.259 "adrfam": "IPv4",
00:14:06.259 "traddr": "10.0.0.1",
00:14:06.259 "trsvcid": "45406"
00:14:06.259 },
00:14:06.259 "auth": {
00:14:06.259 "state": "completed",
00:14:06.259 "digest": "sha256",
00:14:06.259 "dhgroup": "ffdhe4096"
00:14:06.259 }
00:14:06.259 }
00:14:06.259 ]'
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:06.259 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:06.259 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:06.259 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:06.259 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:06.831 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=:
00:14:07.769 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:07.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
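Between SPDK-host iterations the test also authenticates the kernel initiator with the same credentials, passing the DHHC-1 secrets straight to nvme-cli, as in the connect/disconnect pair just above. A trimmed sketch with placeholder secrets (the real strings appear verbatim in the trace; SUBNQN/HOSTNQN reuse the shorthand from the earlier sketch):

    # Sketch: kernel-initiator check of the same DH-HMAC-CHAP credentials.
    # <host-secret>/<ctrl-secret> stand in for the literal DHHC-1 strings.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
        -q "$HOSTNQN" --hostid a27f578f-8275-e111-bd1d-001e673e77fc \
        --dhchap-secret 'DHHC-1:00:<host-secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-secret>:'
    nvme disconnect -n "$SUBNQN"   # prints "disconnected 1 controller(s)" on success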
00:14:07.769 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:07.769 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.769 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.769 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.769 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:07.769 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:07.769 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:08.340 10:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:08.600
00:14:08.600 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:08.600 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:08.600 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:08.858 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:08.858 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:08.858 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.859 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:08.859 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.859 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:08.859 {
00:14:08.859 "cntlid": 27,
00:14:08.859 "qid": 0,
00:14:08.859 "state": "enabled",
00:14:08.859 "thread": "nvmf_tgt_poll_group_000",
00:14:08.859 "listen_address": {
00:14:08.859 "trtype": "TCP",
00:14:08.859 "adrfam": "IPv4",
00:14:08.859 "traddr": "10.0.0.2",
00:14:08.859 "trsvcid": "4420"
00:14:08.859 },
00:14:08.859 "peer_address": {
00:14:08.859 "trtype": "TCP",
00:14:08.859 "adrfam": "IPv4",
00:14:08.859 "traddr": "10.0.0.1",
00:14:08.859 "trsvcid": "45426"
00:14:08.859 },
00:14:08.859 "auth": {
00:14:08.859 "state": "completed",
00:14:08.859 "digest": "sha256",
00:14:08.859 "dhgroup": "ffdhe4096"
00:14:08.859 }
00:14:08.859 }
00:14:08.859 ]'
00:14:08.859 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:08.859 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:08.859 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:09.117 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:09.117 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:09.117 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:09.117 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:09.117 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:09.376 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==:
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:10.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:10.755 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:11.323
00:14:11.323 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:11.323 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:11.323 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
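The get_controllers/get_qpairs/jq sequence that follows each attach is the actual pass/fail check of the handshake. Condensed into one place, reusing the shorthand from the sketches above (expected values here match the ffdhe4096 iterations in this part of the trace):

    # Sketch: how each iteration is verified (cf. auth.sh@44-48).
    [[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]  # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished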
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:11.582 {
00:14:11.582 "cntlid": 29,
00:14:11.582 "qid": 0,
00:14:11.582 "state": "enabled",
00:14:11.582 "thread": "nvmf_tgt_poll_group_000",
00:14:11.582 "listen_address": {
00:14:11.582 "trtype": "TCP",
00:14:11.582 "adrfam": "IPv4",
00:14:11.582 "traddr": "10.0.0.2",
00:14:11.582 "trsvcid": "4420"
00:14:11.582 },
00:14:11.582 "peer_address": {
00:14:11.582 "trtype": "TCP",
00:14:11.582 "adrfam": "IPv4",
00:14:11.582 "traddr": "10.0.0.1",
00:14:11.582 "trsvcid": "45450"
00:14:11.582 },
00:14:11.582 "auth": {
00:14:11.582 "state": "completed",
00:14:11.582 "digest": "sha256",
00:14:11.582 "dhgroup": "ffdhe4096"
00:14:11.582 }
00:14:11.582 }
00:14:11.582 ]'
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:11.582 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:11.841 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:11.841 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:11.841 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:12.100 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie:
00:14:13.480 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:13.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:13.480 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:13.480 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.480 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.480 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.480 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:13.480 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:13.480 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:13.480 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:14:13.480 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:13.480 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:13.480 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:13.480 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:13.480 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:13.480 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3
00:14:13.481 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.481 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.481 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.481 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:13.481 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:14.060
00:14:14.060 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:14.060 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:14.060 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:14.318 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:14.318 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:14.318 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:14.318 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.318 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:14.318 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:14.318 {
00:14:14.318 "cntlid": 31,
00:14:14.318 "qid": 0,
00:14:14.318 "state": "enabled",
00:14:14.318 "thread": "nvmf_tgt_poll_group_000",
00:14:14.318 "listen_address": {
00:14:14.318 "trtype": "TCP",
00:14:14.318 "adrfam": "IPv4",
00:14:14.318 "traddr": "10.0.0.2",
00:14:14.318 "trsvcid": "4420"
00:14:14.319 },
00:14:14.319 "peer_address": {
00:14:14.319 "trtype": "TCP",
00:14:14.319 "adrfam": "IPv4",
00:14:14.319 "traddr": "10.0.0.1",
00:14:14.319 "trsvcid": "45472"
00:14:14.319 },
00:14:14.319 "auth": {
00:14:14.319 "state": "completed",
00:14:14.319 "digest": "sha256",
00:14:14.319 "dhgroup": "ffdhe4096"
00:14:14.319 }
00:14:14.319 }
00:14:14.319 ]'
00:14:14.319 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:14.319 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:14.319 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:14.319 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:14.319 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:14.319 10:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:14.319 10:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:14.319 10:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:14.577 10:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=:
00:14:15.958 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:15.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:15.958 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:15.958 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.958 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.958 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.958 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:15.958 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:15.958 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:15.958 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
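Here the sweep moves on to ffdhe6144, and the host is pinned to that group before reconnecting. On the host side the trace drives a second SPDK application over /var/tmp/host.sock (the test's hostrpc wrapper). One iteration of that half, sketched; key0/ckey0 are again assumed to be key names already loaded into that instance, and RPC/SUBNQN/HOSTNQN reuse the earlier shorthand:

    # Sketch: SPDK-host half of one iteration.
    HOSTRPC() { "$RPC" -s /var/tmp/host.sock "$@"; }

    # Pin the host to the digest/DH group under test before connecting.
    HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

    # Attaching runs the DH-HMAC-CHAP handshake on the new admin queue.
    HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Detach so the next digest/group/key combination starts clean.
    HOSTRPC bdev_nvme_detach_controller nvme0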
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:16.218 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:16.842
00:14:16.842 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:16.842 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:16.842 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:17.126 {
00:14:17.126 "cntlid": 33,
00:14:17.126 "qid": 0,
00:14:17.126 "state": "enabled",
00:14:17.126 "thread": "nvmf_tgt_poll_group_000",
00:14:17.126 "listen_address": {
00:14:17.126 "trtype": "TCP",
00:14:17.126 "adrfam": "IPv4",
00:14:17.126 "traddr": "10.0.0.2",
00:14:17.126 "trsvcid": "4420"
00:14:17.126 },
00:14:17.126 "peer_address": {
00:14:17.126 "trtype": "TCP",
00:14:17.126 "adrfam": "IPv4",
00:14:17.126 "traddr": "10.0.0.1",
00:14:17.126 "trsvcid": "33454"
00:14:17.126 },
00:14:17.126 "auth": {
00:14:17.126 "state": "completed",
00:14:17.126 "digest": "sha256",
00:14:17.126 "dhgroup": "ffdhe6144"
00:14:17.126 }
00:14:17.126 }
00:14:17.126 ]'
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:17.126 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:17.691 10:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=:
00:14:18.625 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:18.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:18.625 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:18.625 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.625 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.625 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.625 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:18.625 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:18.625 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
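A note on the DHHC-1:NN:...: strings exercised above: in the NVMe-oF DH-HMAC-CHAP secret representation, the middle field encodes the secret's hash/length class (00 for a plain 32-byte secret here, 01/02/03 for 32/48/64-byte variants), and the base64 payload is understood to carry the secret plus a 4-byte CRC. A rough, assumption-laden way to eyeball that against a secret from this log:

    # Sketch: decode one DHHC-1 secret from the trace and check its size.
    # Assumes the payload is secret||CRC32, per the NVMe-oF key format.
    key='DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq:'
    b64=${key#DHHC-1:*:}          # strip the "DHHC-1:01:" prefix
    b64=${b64%:}                  # and the trailing ':'
    printf '%s' "$b64" | base64 -d | wc -c   # -> 36 = 32-byte secret + 4-byte CRC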
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:19.189 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:19.753
00:14:19.753 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:19.753 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:19.753 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:20.011 {
00:14:20.011 "cntlid": 35,
00:14:20.011 "qid": 0,
00:14:20.011 "state": "enabled",
00:14:20.011 "thread": "nvmf_tgt_poll_group_000",
00:14:20.011 "listen_address": {
00:14:20.011 "trtype": "TCP",
00:14:20.011 "adrfam": "IPv4",
00:14:20.011 "traddr": "10.0.0.2",
00:14:20.011 "trsvcid": "4420"
00:14:20.011 },
00:14:20.011 "peer_address": {
00:14:20.011 "trtype": "TCP",
00:14:20.011 "adrfam": "IPv4",
00:14:20.011 "traddr": "10.0.0.1",
00:14:20.011 "trsvcid": "33476"
00:14:20.011 },
00:14:20.011 "auth": {
00:14:20.011 "state": "completed",
00:14:20.011 "digest": "sha256",
00:14:20.011 "dhgroup": "ffdhe6144"
00:14:20.011 }
00:14:20.011 }
00:14:20.011 ]'
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:20.011 10:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:20.576 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==:
00:14:21.509 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:21.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:21.509 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:21.509 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.509 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.509 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.509 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:21.509 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:21.509 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:21.766 10:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:22.698
00:14:22.698 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:22.698 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:22.698 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:22.955 {
00:14:22.955 "cntlid": 37,
00:14:22.955 "qid": 0,
00:14:22.955 "state": "enabled",
00:14:22.955 "thread": "nvmf_tgt_poll_group_000",
00:14:22.955 "listen_address": {
00:14:22.955 "trtype": "TCP",
00:14:22.955 "adrfam": "IPv4",
00:14:22.955 "traddr": "10.0.0.2",
00:14:22.955 "trsvcid": "4420"
00:14:22.955 },
00:14:22.955 "peer_address": {
00:14:22.955 "trtype": "TCP",
00:14:22.955 "adrfam": "IPv4",
00:14:22.955 "traddr": "10.0.0.1",
00:14:22.955 "trsvcid": "33520"
00:14:22.955 },
00:14:22.955 "auth": {
00:14:22.955 "state": "completed",
00:14:22.955 "digest": "sha256",
00:14:22.955 "dhgroup": "ffdhe6144"
00:14:22.955 }
00:14:22.955 }
00:14:22.955 ]'
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:22.955 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:23.212 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie:
00:14:24.584 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:24.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:24.584 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:24.584 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.584 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:24.584 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.584 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:24.584 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:24.584 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3
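Note that this key3 pass, like the earlier ones, adds the host with --dhchap-key only: auth.sh@37 expands the controller-key argument solely when a ckey is configured, so the key3 iterations exercise unidirectional (host-to-target) authentication. The idiom, sketched with the earlier shorthand:

    # Sketch: optional bidirectional auth (cf. auth.sh@37-39).
    # ckeys[3] is empty in this run, so key3 authenticates host->target only.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" "${ckey[@]}"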
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:24.842 10:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:25.407
00:14:25.407 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:25.407 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:25.407 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:25.665 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:25.665 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:25.665 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.665 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.665 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.665 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:25.665 {
00:14:25.665 "cntlid": 39,
00:14:25.665 "qid": 0,
00:14:25.665 "state": "enabled",
00:14:25.665 "thread": "nvmf_tgt_poll_group_000",
00:14:25.665 "listen_address": {
00:14:25.665 "trtype": "TCP",
00:14:25.665 "adrfam": "IPv4",
00:14:25.665 "traddr": "10.0.0.2",
00:14:25.665 "trsvcid": "4420"
00:14:25.665 },
00:14:25.665 "peer_address": {
00:14:25.665 "trtype": "TCP",
00:14:25.665 "adrfam": "IPv4",
00:14:25.665 "traddr": "10.0.0.1",
00:14:25.665 "trsvcid": "48126"
00:14:25.665 },
00:14:25.665 "auth": {
00:14:25.665 "state": "completed",
00:14:25.665 "digest": "sha256",
00:14:25.665 "dhgroup": "ffdhe6144"
00:14:25.665 }
00:14:25.665 }
00:14:25.665 ]'
00:14:25.665 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:25.665 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:25.665 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:25.923 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:25.923 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:25.923 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:25.923 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:25.923 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:26.181 10:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=:
00:14:27.555 10:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:27.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:27.555 10:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:27.555 10:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.555 10:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:27.555 10:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.555 10:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:27.555 10:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:27.555 10:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:27.555 10:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
common/autotest_common.sh@10 -- # set +x 00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.555 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.928 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.928 { 00:14:28.928 "cntlid": 41, 00:14:28.928 "qid": 0, 00:14:28.928 "state": "enabled", 00:14:28.928 "thread": "nvmf_tgt_poll_group_000", 00:14:28.928 "listen_address": { 00:14:28.928 "trtype": "TCP", 00:14:28.928 "adrfam": "IPv4", 00:14:28.928 "traddr": "10.0.0.2", 00:14:28.928 "trsvcid": "4420" 00:14:28.928 }, 00:14:28.928 "peer_address": { 00:14:28.928 "trtype": "TCP", 00:14:28.928 "adrfam": "IPv4", 00:14:28.928 "traddr": "10.0.0.1", 00:14:28.928 "trsvcid": "48156" 00:14:28.928 }, 00:14:28.928 "auth": { 00:14:28.928 "state": "completed", 00:14:28.928 "digest": "sha256", 00:14:28.928 "dhgroup": "ffdhe8192" 00:14:28.928 } 00:14:28.928 } 00:14:28.928 ]' 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.928 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.186 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:29.186 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.186 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.186 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:29.186 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.443 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.816 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.189 00:14:32.189 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.189 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.189 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.189 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.189 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.189 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.189 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.189 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.190 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.190 { 00:14:32.190 "cntlid": 43, 00:14:32.190 "qid": 0, 00:14:32.190 "state": "enabled", 00:14:32.190 "thread": "nvmf_tgt_poll_group_000", 00:14:32.190 "listen_address": { 00:14:32.190 "trtype": "TCP", 00:14:32.190 "adrfam": "IPv4", 00:14:32.190 "traddr": "10.0.0.2", 00:14:32.190 "trsvcid": "4420" 00:14:32.190 }, 00:14:32.190 "peer_address": { 00:14:32.190 "trtype": "TCP", 00:14:32.190 "adrfam": "IPv4", 00:14:32.190 "traddr": "10.0.0.1", 00:14:32.190 "trsvcid": "48176" 00:14:32.190 }, 00:14:32.190 "auth": { 00:14:32.190 "state": "completed", 00:14:32.190 "digest": "sha256", 00:14:32.190 "dhgroup": "ffdhe8192" 00:14:32.190 } 00:14:32.190 } 00:14:32.190 ]' 00:14:32.190 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.190 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.447 10:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.447 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:32.447 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.447 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.447 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.447 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.705 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:14:34.076 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.076 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:34.076 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.076 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.076 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.076 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.076 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:34.076 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.077 10:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.450 00:14:35.450 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.450 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.450 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.450 { 00:14:35.450 "cntlid": 45, 00:14:35.450 "qid": 0, 00:14:35.450 "state": "enabled", 00:14:35.450 "thread": "nvmf_tgt_poll_group_000", 00:14:35.450 "listen_address": { 00:14:35.450 "trtype": "TCP", 00:14:35.450 "adrfam": "IPv4", 00:14:35.450 "traddr": "10.0.0.2", 00:14:35.450 "trsvcid": "4420" 00:14:35.450 }, 00:14:35.450 "peer_address": { 00:14:35.450 "trtype": "TCP", 00:14:35.450 "adrfam": "IPv4", 00:14:35.450 "traddr": "10.0.0.1", 00:14:35.450 "trsvcid": "48206" 00:14:35.450 }, 00:14:35.450 "auth": { 00:14:35.450 "state": "completed", 00:14:35.450 "digest": "sha256", 00:14:35.450 "dhgroup": "ffdhe8192" 00:14:35.450 } 00:14:35.450 } 00:14:35.450 ]' 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.450 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:35.708 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.708 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.708 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.708 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.965 10:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:14:37.339 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.339 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:37.339 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.339 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.339 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.339 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.339 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.339 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.339 10:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.714 00:14:38.714 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.714 10:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.714 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.714 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.714 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.714 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.714 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.715 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.715 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.715 { 00:14:38.715 "cntlid": 47, 00:14:38.715 "qid": 0, 00:14:38.715 "state": "enabled", 00:14:38.715 "thread": "nvmf_tgt_poll_group_000", 00:14:38.715 "listen_address": { 00:14:38.715 "trtype": "TCP", 00:14:38.715 "adrfam": "IPv4", 00:14:38.715 "traddr": "10.0.0.2", 00:14:38.715 "trsvcid": "4420" 00:14:38.715 }, 00:14:38.715 "peer_address": { 00:14:38.715 "trtype": "TCP", 00:14:38.715 "adrfam": "IPv4", 00:14:38.715 "traddr": "10.0.0.1", 00:14:38.715 "trsvcid": "57684" 00:14:38.715 }, 00:14:38.715 "auth": { 00:14:38.715 "state": "completed", 00:14:38.715 "digest": "sha256", 00:14:38.715 "dhgroup": "ffdhe8192" 00:14:38.715 } 00:14:38.715 } 00:14:38.715 ]' 00:14:38.715 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.715 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.715 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.715 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:38.715 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.972 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.972 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.972 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.230 10:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.605 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.606 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.171 00:14:41.171 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.171 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.171 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.428 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.428 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.428 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.428 10:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.428 { 00:14:41.428 "cntlid": 49, 00:14:41.428 "qid": 0, 00:14:41.428 "state": "enabled", 00:14:41.428 "thread": "nvmf_tgt_poll_group_000", 00:14:41.428 "listen_address": { 00:14:41.428 "trtype": "TCP", 00:14:41.428 "adrfam": "IPv4", 00:14:41.428 "traddr": "10.0.0.2", 00:14:41.428 "trsvcid": "4420" 00:14:41.428 }, 00:14:41.428 "peer_address": { 00:14:41.428 "trtype": "TCP", 00:14:41.428 "adrfam": "IPv4", 00:14:41.428 "traddr": "10.0.0.1", 00:14:41.428 "trsvcid": "57706" 00:14:41.428 }, 00:14:41.428 "auth": { 00:14:41.428 "state": "completed", 00:14:41.428 "digest": "sha384", 00:14:41.428 "dhgroup": "null" 00:14:41.428 } 00:14:41.428 } 00:14:41.428 ]' 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.428 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.685 10:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:14:43.059 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.059 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:43.059 10:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.059 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.059 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.059 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.059 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:43.059 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:43.318 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:43.318 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.318 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:43.318 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:43.318 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:43.318 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.318 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.318 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.318 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.318 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.318 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.318 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.910 00:14:43.910 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.910 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.910 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.910 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.910 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.910 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.910 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.189 { 00:14:44.189 "cntlid": 51, 00:14:44.189 "qid": 0, 00:14:44.189 "state": "enabled", 00:14:44.189 "thread": "nvmf_tgt_poll_group_000", 00:14:44.189 "listen_address": { 00:14:44.189 "trtype": "TCP", 00:14:44.189 "adrfam": "IPv4", 00:14:44.189 "traddr": "10.0.0.2", 00:14:44.189 "trsvcid": "4420" 00:14:44.189 }, 00:14:44.189 "peer_address": { 00:14:44.189 "trtype": "TCP", 00:14:44.189 "adrfam": "IPv4", 00:14:44.189 "traddr": "10.0.0.1", 00:14:44.189 "trsvcid": "57734" 00:14:44.189 }, 00:14:44.189 "auth": { 00:14:44.189 "state": "completed", 00:14:44.189 "digest": "sha384", 00:14:44.189 "dhgroup": "null" 00:14:44.189 } 00:14:44.189 } 00:14:44.189 ]' 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.189 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.451 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:14:45.824 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.824 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:45.824 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.824 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.824 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.824 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.825 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.083 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.083 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.083 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.340 00:14:46.340 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.340 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.340 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.598 { 00:14:46.598 "cntlid": 53, 00:14:46.598 "qid": 0, 00:14:46.598 "state": "enabled", 00:14:46.598 "thread": "nvmf_tgt_poll_group_000", 00:14:46.598 "listen_address": { 00:14:46.598 "trtype": "TCP", 00:14:46.598 "adrfam": "IPv4", 00:14:46.598 "traddr": "10.0.0.2", 00:14:46.598 "trsvcid": "4420" 00:14:46.598 }, 00:14:46.598 "peer_address": { 00:14:46.598 "trtype": "TCP", 00:14:46.598 "adrfam": "IPv4", 00:14:46.598 "traddr": "10.0.0.1", 00:14:46.598 "trsvcid": "48924" 00:14:46.598 }, 00:14:46.598 "auth": { 00:14:46.598 "state": "completed", 00:14:46.598 "digest": "sha384", 00:14:46.598 "dhgroup": "null" 00:14:46.598 } 00:14:46.598 } 00:14:46.598 ]' 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:46.598 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.856 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.856 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.856 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.114 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:14:48.487 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.487 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:48.487 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.487 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.487 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.487 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.487 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.487 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.487 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:48.487 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.487 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:48.487 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:48.487 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:48.487 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.487 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:48.487 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.488 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.488 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.488 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.488 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.053 00:14:49.053 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.053 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.053 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.311 { 00:14:49.311 "cntlid": 55, 00:14:49.311 "qid": 0, 00:14:49.311 "state": "enabled", 00:14:49.311 "thread": "nvmf_tgt_poll_group_000", 00:14:49.311 "listen_address": { 00:14:49.311 "trtype": "TCP", 00:14:49.311 "adrfam": "IPv4", 00:14:49.311 "traddr": "10.0.0.2", 00:14:49.311 "trsvcid": "4420" 00:14:49.311 }, 00:14:49.311 "peer_address": { 
00:14:49.311 "trtype": "TCP", 00:14:49.311 "adrfam": "IPv4", 00:14:49.311 "traddr": "10.0.0.1", 00:14:49.311 "trsvcid": "48958" 00:14:49.311 }, 00:14:49.311 "auth": { 00:14:49.311 "state": "completed", 00:14:49.311 "digest": "sha384", 00:14:49.311 "dhgroup": "null" 00:14:49.311 } 00:14:49.311 } 00:14:49.311 ]' 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.311 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.312 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.312 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.570 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:14:50.944 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.944 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:50.944 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.944 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.944 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.944 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:50.944 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.944 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:50.944 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.202 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.460 00:14:51.460 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.460 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.460 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.718 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.718 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.718 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.718 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.718 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.718 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.718 { 00:14:51.718 "cntlid": 57, 00:14:51.718 "qid": 0, 00:14:51.718 "state": "enabled", 00:14:51.718 "thread": "nvmf_tgt_poll_group_000", 00:14:51.718 "listen_address": { 00:14:51.718 "trtype": "TCP", 00:14:51.718 "adrfam": "IPv4", 00:14:51.718 "traddr": "10.0.0.2", 00:14:51.718 "trsvcid": "4420" 00:14:51.718 }, 00:14:51.718 "peer_address": { 00:14:51.718 "trtype": "TCP", 00:14:51.718 "adrfam": "IPv4", 00:14:51.718 "traddr": "10.0.0.1", 00:14:51.718 "trsvcid": "48990" 00:14:51.718 }, 00:14:51.718 "auth": { 00:14:51.718 "state": "completed", 00:14:51.718 "digest": "sha384", 00:14:51.718 "dhgroup": "ffdhe2048" 00:14:51.718 } 00:14:51.718 } 00:14:51.718 ]' 
00:14:51.718 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.718 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.718 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.976 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:51.976 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.976 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.976 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.976 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.234 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:14:53.168 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.168 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:53.168 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.168 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.168 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.168 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:53.168 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.735 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.993 00:14:53.993 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.993 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.993 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.251 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.251 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.251 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.251 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.251 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.251 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.251 { 00:14:54.251 "cntlid": 59, 00:14:54.251 "qid": 0, 00:14:54.251 "state": "enabled", 00:14:54.251 "thread": "nvmf_tgt_poll_group_000", 00:14:54.251 "listen_address": { 00:14:54.251 "trtype": "TCP", 00:14:54.251 "adrfam": "IPv4", 00:14:54.251 "traddr": "10.0.0.2", 00:14:54.251 "trsvcid": "4420" 00:14:54.251 }, 00:14:54.251 "peer_address": { 00:14:54.251 "trtype": "TCP", 00:14:54.251 "adrfam": "IPv4", 00:14:54.251 "traddr": "10.0.0.1", 00:14:54.251 "trsvcid": "49004" 00:14:54.251 }, 00:14:54.251 "auth": { 00:14:54.251 "state": "completed", 00:14:54.251 "digest": "sha384", 00:14:54.251 "dhgroup": "ffdhe2048" 00:14:54.251 } 00:14:54.251 } 00:14:54.251 ]' 00:14:54.251 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.251 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.251 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.251 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:54.251 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.508 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.508 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.508 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.766 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.137 
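Each iteration pairs a target-side nvmf_subsystem_add_host with a host-side bdev_nvme_attach_controller naming the same keys, so both ends agree on which DH-HMAC-CHAP material to use. A condensed sketch of one bidirectional pass (key2/ckey2 are key names registered with both applications earlier in the script, outside this excerpt):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
  # target side: authorize the host NQN and bind host + controller keys
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side (separate RPC socket): attach, which triggers authentication
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2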
10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.137 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.703 00:14:56.703 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.703 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.703 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.960 { 00:14:56.960 "cntlid": 61, 00:14:56.960 "qid": 0, 00:14:56.960 "state": "enabled", 00:14:56.960 "thread": "nvmf_tgt_poll_group_000", 00:14:56.960 "listen_address": { 00:14:56.960 "trtype": "TCP", 00:14:56.960 "adrfam": "IPv4", 00:14:56.960 "traddr": "10.0.0.2", 00:14:56.960 "trsvcid": "4420" 00:14:56.960 }, 00:14:56.960 "peer_address": { 00:14:56.960 "trtype": "TCP", 00:14:56.960 "adrfam": "IPv4", 00:14:56.960 "traddr": "10.0.0.1", 00:14:56.960 "trsvcid": "43652" 00:14:56.960 }, 00:14:56.960 "auth": { 00:14:56.960 "state": "completed", 00:14:56.960 "digest": "sha384", 00:14:56.960 "dhgroup": "ffdhe2048" 00:14:56.960 } 00:14:56.960 } 00:14:56.960 ]' 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.960 10:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.960 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.217 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:14:58.586 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.587 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:58.587 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.587 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.587 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.587 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.587 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:58.587 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.844 
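Note that the key3 pass above drops --dhchap-ctrlr-key: the script's ckeys array has no entry at index 3, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion collapses to nothing and authentication is unidirectional, i.e. the host proves its identity but does not challenge the controller. The expansion pattern in isolation, with hypothetical array contents for illustration:

  ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)    # deliberately no entry for index 3
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]} extra args"            # prints "0 extra args" for key3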
10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.844 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.101 00:14:59.101 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.101 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.101 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.359 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.359 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.359 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.359 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.359 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.359 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.359 { 00:14:59.359 "cntlid": 63, 00:14:59.359 "qid": 0, 00:14:59.359 "state": "enabled", 00:14:59.359 "thread": "nvmf_tgt_poll_group_000", 00:14:59.359 "listen_address": { 00:14:59.359 "trtype": "TCP", 00:14:59.359 "adrfam": "IPv4", 00:14:59.359 "traddr": "10.0.0.2", 00:14:59.359 "trsvcid": "4420" 00:14:59.359 }, 00:14:59.359 "peer_address": { 00:14:59.359 "trtype": "TCP", 00:14:59.359 "adrfam": "IPv4", 00:14:59.359 "traddr": "10.0.0.1", 00:14:59.359 "trsvcid": "43674" 00:14:59.359 }, 00:14:59.359 "auth": { 00:14:59.359 "state": "completed", 00:14:59.359 "digest": "sha384", 00:14:59.359 "dhgroup": "ffdhe2048" 00:14:59.359 } 00:14:59.359 } 00:14:59.359 ]' 00:14:59.359 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.359 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.359 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.616 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.616 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.616 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.616 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.616 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:59.874 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:15:00.807 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.065 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:01.065 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.065 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.065 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.065 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.065 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.065 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.065 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.323 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.323 10:21:50 
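The for-loop markers above (auth.sh@92/@93) give the shape of the whole test: an outer loop over the permitted FFDHE groups and an inner loop over the key indices, re-running the same connect_authenticate body for every combination. Schematically (a sketch: only ffdhe2048/3072/4096 appear in this excerpt, and the surrounding digest loop is an assumption):

  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # the groups this excerpt covers
  keys=(key0 key1 key2 key3)
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # re-arm the initiator for exactly this combination, then test it
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done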
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.581 00:15:01.581 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.581 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.581 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.839 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.839 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.839 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.839 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.096 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.096 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.096 { 00:15:02.096 "cntlid": 65, 00:15:02.096 "qid": 0, 00:15:02.096 "state": "enabled", 00:15:02.096 "thread": "nvmf_tgt_poll_group_000", 00:15:02.096 "listen_address": { 00:15:02.096 "trtype": "TCP", 00:15:02.096 "adrfam": "IPv4", 00:15:02.096 "traddr": "10.0.0.2", 00:15:02.096 "trsvcid": "4420" 00:15:02.096 }, 00:15:02.096 "peer_address": { 00:15:02.096 "trtype": "TCP", 00:15:02.096 "adrfam": "IPv4", 00:15:02.096 "traddr": "10.0.0.1", 00:15:02.096 "trsvcid": "43694" 00:15:02.096 }, 00:15:02.096 "auth": { 00:15:02.096 "state": "completed", 00:15:02.096 "digest": "sha384", 00:15:02.096 "dhgroup": "ffdhe3072" 00:15:02.096 } 00:15:02.096 } 00:15:02.096 ]' 00:15:02.097 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.097 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.097 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.097 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.097 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.097 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.097 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.097 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.355 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:15:03.727 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.728 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:03.728 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.728 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.728 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.728 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.728 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:03.728 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.985 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.242 00:15:04.242 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.242 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.242 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.500 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.500 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.500 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.500 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.500 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.500 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.500 { 00:15:04.500 "cntlid": 67, 00:15:04.500 "qid": 0, 00:15:04.500 "state": "enabled", 00:15:04.500 "thread": "nvmf_tgt_poll_group_000", 00:15:04.500 "listen_address": { 00:15:04.500 "trtype": "TCP", 00:15:04.500 "adrfam": "IPv4", 00:15:04.500 "traddr": "10.0.0.2", 00:15:04.500 "trsvcid": "4420" 00:15:04.500 }, 00:15:04.500 "peer_address": { 00:15:04.500 "trtype": "TCP", 00:15:04.500 "adrfam": "IPv4", 00:15:04.500 "traddr": "10.0.0.1", 00:15:04.500 "trsvcid": "43732" 00:15:04.500 }, 00:15:04.500 "auth": { 00:15:04.500 "state": "completed", 00:15:04.500 "digest": "sha384", 00:15:04.500 "dhgroup": "ffdhe3072" 00:15:04.500 } 00:15:04.500 } 00:15:04.500 ]' 00:15:04.500 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.500 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.500 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.758 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.758 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.758 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.758 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.758 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.016 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:15:06.390 10:21:55 
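The kernel-initiator leg above passes the same material to nvme-cli as opaque DHHC-1 strings of the form DHHC-1:<hmac-id>:<base64 key data>:, where the hmac-id (00 plain, 01 SHA-256, 02 SHA-384, 03 SHA-512) records how the secret itself was transformed, independent of the sha384 digest negotiated on the wire. A hedged sketch of producing and using such a secret; recent nvme-cli versions ship a gen-dhchap-key helper, but its flags vary by version:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
  # flag names are version-dependent; check `nvme gen-dhchap-key --help`
  key=$(nvme gen-dhchap-key -n "$hostnqn")
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "${hostnqn##*:}" --dhchap-secret "$key"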
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.390 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:06.390 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.390 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.390 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.390 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.390 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:06.390 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.390 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.955 00:15:06.955 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.955 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.955 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.214 { 00:15:07.214 "cntlid": 69, 00:15:07.214 "qid": 0, 00:15:07.214 "state": "enabled", 00:15:07.214 "thread": "nvmf_tgt_poll_group_000", 00:15:07.214 "listen_address": { 00:15:07.214 "trtype": "TCP", 00:15:07.214 "adrfam": "IPv4", 00:15:07.214 "traddr": "10.0.0.2", 00:15:07.214 "trsvcid": "4420" 00:15:07.214 }, 00:15:07.214 "peer_address": { 00:15:07.214 "trtype": "TCP", 00:15:07.214 "adrfam": "IPv4", 00:15:07.214 "traddr": "10.0.0.1", 00:15:07.214 "trsvcid": "47898" 00:15:07.214 }, 00:15:07.214 "auth": { 00:15:07.214 "state": "completed", 00:15:07.214 "digest": "sha384", 00:15:07.214 "dhgroup": "ffdhe3072" 00:15:07.214 } 00:15:07.214 } 00:15:07.214 ]' 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.214 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.779 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:15:08.713 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.713 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:08.713 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.713 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.713 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.713 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.713 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:08.713 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.972 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.539 00:15:09.539 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.539 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.539 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.797 10:21:59 
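Throughout the log, hostrpc is nothing more than rpc.py pointed at /var/tmp/host.sock: the test runs two SPDK applications side by side, the nvmf target on the default RPC socket and a second application acting as the NVMe-oF host, so bdev_nvme_* calls land on the initiator while nvmf_* calls land on the target. A minimal sketch of the wrappers, assuming the stock /var/tmp/spdk.sock default for the target:

  rpc_cmd() { scripts/rpc.py "$@"; }                       # target, default socket
  hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; } # host-side application
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0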
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.797 { 00:15:09.797 "cntlid": 71, 00:15:09.797 "qid": 0, 00:15:09.797 "state": "enabled", 00:15:09.797 "thread": "nvmf_tgt_poll_group_000", 00:15:09.797 "listen_address": { 00:15:09.797 "trtype": "TCP", 00:15:09.797 "adrfam": "IPv4", 00:15:09.797 "traddr": "10.0.0.2", 00:15:09.797 "trsvcid": "4420" 00:15:09.797 }, 00:15:09.797 "peer_address": { 00:15:09.797 "trtype": "TCP", 00:15:09.797 "adrfam": "IPv4", 00:15:09.797 "traddr": "10.0.0.1", 00:15:09.797 "trsvcid": "47932" 00:15:09.797 }, 00:15:09.797 "auth": { 00:15:09.797 "state": "completed", 00:15:09.797 "digest": "sha384", 00:15:09.797 "dhgroup": "ffdhe3072" 00:15:09.797 } 00:15:09.797 } 00:15:09.797 ]' 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.797 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.395 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:15:11.352 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.352 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:11.352 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.352 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.352 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.352 10:22:01 
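Before the next group (ffdhe4096) is exercised, the host-side bdev_nvme module is reconfigured so the initiator will only offer the one digest/dhgroup pair under test; a subsequent "completed" auth state in the qpair dump is therefore evidence that exactly that combination was negotiated rather than a fallback. The re-arming call, with flags as in the log:

  # allow only sha384 + ffdhe4096 for the attach attempts that follow
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096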
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.352 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.352 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:11.352 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.610 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.176 00:15:12.176 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.176 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.176 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.434 10:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.434 { 00:15:12.434 "cntlid": 73, 00:15:12.434 "qid": 0, 00:15:12.434 "state": "enabled", 00:15:12.434 "thread": "nvmf_tgt_poll_group_000", 00:15:12.434 "listen_address": { 00:15:12.434 "trtype": "TCP", 00:15:12.434 "adrfam": "IPv4", 00:15:12.434 "traddr": "10.0.0.2", 00:15:12.434 "trsvcid": "4420" 00:15:12.434 }, 00:15:12.434 "peer_address": { 00:15:12.434 "trtype": "TCP", 00:15:12.434 "adrfam": "IPv4", 00:15:12.434 "traddr": "10.0.0.1", 00:15:12.434 "trsvcid": "47968" 00:15:12.434 }, 00:15:12.434 "auth": { 00:15:12.434 "state": "completed", 00:15:12.434 "digest": "sha384", 00:15:12.434 "dhgroup": "ffdhe4096" 00:15:12.434 } 00:15:12.434 } 00:15:12.434 ]' 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.434 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.692 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.692 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.692 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.950 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:15:14.323 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.323 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:14.323 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.323 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.323 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.323 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.323 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:14.323 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.323 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.889 00:15:14.889 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.889 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.889 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:15:15.147 { 00:15:15.147 "cntlid": 75, 00:15:15.147 "qid": 0, 00:15:15.147 "state": "enabled", 00:15:15.147 "thread": "nvmf_tgt_poll_group_000", 00:15:15.147 "listen_address": { 00:15:15.147 "trtype": "TCP", 00:15:15.147 "adrfam": "IPv4", 00:15:15.147 "traddr": "10.0.0.2", 00:15:15.147 "trsvcid": "4420" 00:15:15.147 }, 00:15:15.147 "peer_address": { 00:15:15.147 "trtype": "TCP", 00:15:15.147 "adrfam": "IPv4", 00:15:15.147 "traddr": "10.0.0.1", 00:15:15.147 "trsvcid": "48000" 00:15:15.147 }, 00:15:15.147 "auth": { 00:15:15.147 "state": "completed", 00:15:15.147 "digest": "sha384", 00:15:15.147 "dhgroup": "ffdhe4096" 00:15:15.147 } 00:15:15.147 } 00:15:15.147 ]' 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.147 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.405 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:15:16.779 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.779 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:16.779 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.779 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.779 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.779 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.779 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:16.779 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:17.037 
10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.037 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.295 00:15:17.295 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.295 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.295 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.553 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.553 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.553 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.553 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.553 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.553 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.553 { 00:15:17.553 "cntlid": 77, 00:15:17.553 "qid": 0, 00:15:17.553 "state": "enabled", 00:15:17.553 "thread": "nvmf_tgt_poll_group_000", 00:15:17.553 "listen_address": { 00:15:17.553 "trtype": "TCP", 00:15:17.553 "adrfam": "IPv4", 00:15:17.553 "traddr": "10.0.0.2", 00:15:17.553 "trsvcid": "4420" 00:15:17.553 }, 00:15:17.553 "peer_address": { 
00:15:17.553 "trtype": "TCP", 00:15:17.553 "adrfam": "IPv4", 00:15:17.553 "traddr": "10.0.0.1", 00:15:17.553 "trsvcid": "42628" 00:15:17.553 }, 00:15:17.553 "auth": { 00:15:17.553 "state": "completed", 00:15:17.553 "digest": "sha384", 00:15:17.553 "dhgroup": "ffdhe4096" 00:15:17.553 } 00:15:17.553 } 00:15:17.553 ]' 00:15:17.553 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.811 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.811 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.811 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.811 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.811 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.811 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.811 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.068 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:15:19.441 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.441 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:19.441 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.441 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.441 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.442 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.442 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.442 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:19.442 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.008 00:15:20.008 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.008 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.008 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.266 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.267 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.267 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.267 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.267 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.267 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.267 { 00:15:20.267 "cntlid": 79, 00:15:20.267 "qid": 0, 00:15:20.267 "state": "enabled", 00:15:20.267 "thread": "nvmf_tgt_poll_group_000", 00:15:20.267 "listen_address": { 00:15:20.267 "trtype": "TCP", 00:15:20.267 "adrfam": "IPv4", 00:15:20.267 "traddr": "10.0.0.2", 00:15:20.267 "trsvcid": "4420" 00:15:20.267 }, 00:15:20.267 "peer_address": { 00:15:20.267 "trtype": "TCP", 00:15:20.267 "adrfam": "IPv4", 00:15:20.267 "traddr": "10.0.0.1", 00:15:20.267 "trsvcid": "42644" 00:15:20.267 }, 00:15:20.267 "auth": { 00:15:20.267 "state": "completed", 00:15:20.267 "digest": "sha384", 00:15:20.267 "dhgroup": "ffdhe4096" 00:15:20.267 } 00:15:20.267 } 00:15:20.267 ]' 00:15:20.267 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:15:20.267 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.267 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.267 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:20.267 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.525 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.525 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.525 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.783 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.155 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.089 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.089 { 00:15:23.089 "cntlid": 81, 00:15:23.089 "qid": 0, 00:15:23.089 "state": "enabled", 00:15:23.089 "thread": "nvmf_tgt_poll_group_000", 00:15:23.089 "listen_address": { 00:15:23.089 "trtype": "TCP", 00:15:23.089 "adrfam": "IPv4", 00:15:23.089 "traddr": "10.0.0.2", 00:15:23.089 "trsvcid": "4420" 00:15:23.089 }, 00:15:23.089 "peer_address": { 00:15:23.089 "trtype": "TCP", 00:15:23.089 "adrfam": "IPv4", 00:15:23.089 "traddr": "10.0.0.1", 00:15:23.089 "trsvcid": "42660" 00:15:23.089 }, 00:15:23.089 "auth": { 00:15:23.089 "state": "completed", 00:15:23.089 "digest": "sha384", 00:15:23.089 "dhgroup": "ffdhe6144" 00:15:23.089 } 00:15:23.089 } 00:15:23.089 ]' 00:15:23.089 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.347 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.347 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.347 10:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:23.348 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.348 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.348 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.348 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.606 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:15:24.979 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.980 10:22:14 
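# --- Illustrative sketch, not trace output: the round trip that
# --- connect_authenticate (target/auth.sh@96) performs for each
# --- digest/dhgroup/key combination, here sha384 / ffdhe6144 / key1.
# --- Assumes key1 and ckey1 were registered earlier in auth.sh, exactly
# --- as the RPCs traced above show; paths and NQNs copied from the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
# Pin the host-side initiator to a single digest and DH group.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# Allow the host on the subsystem with this key pair, then attach.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1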
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.980 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.913 00:15:25.913 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.913 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.913 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.171 { 00:15:26.171 "cntlid": 83, 00:15:26.171 "qid": 0, 00:15:26.171 "state": "enabled", 00:15:26.171 "thread": "nvmf_tgt_poll_group_000", 00:15:26.171 "listen_address": { 00:15:26.171 "trtype": "TCP", 00:15:26.171 "adrfam": "IPv4", 00:15:26.171 "traddr": "10.0.0.2", 00:15:26.171 "trsvcid": "4420" 00:15:26.171 }, 00:15:26.171 "peer_address": { 00:15:26.171 "trtype": "TCP", 00:15:26.171 "adrfam": "IPv4", 00:15:26.171 "traddr": "10.0.0.1", 00:15:26.171 "trsvcid": "43696" 00:15:26.171 }, 00:15:26.171 "auth": { 00:15:26.171 "state": "completed", 00:15:26.171 "digest": "sha384", 00:15:26.171 "dhgroup": "ffdhe6144" 00:15:26.171 } 00:15:26.171 } 00:15:26.171 ]' 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.171 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.429 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:15:27.801 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.801 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:27.801 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.801 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.801 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.801 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.801 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.801 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.059 10:22:17 
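# --- Illustrative sketch, not trace output: the verification step at
# --- target/auth.sh@44-48. The jq paths match the nvmf_subsystem_get_qpairs
# --- JSON printed above (cntlid 83); $rpc and $subnqn as in the sketch above.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]   # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP finished
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0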
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.059 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.626 00:15:28.626 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.626 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.626 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.884 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.884 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.884 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.884 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.884 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.884 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.884 { 00:15:28.884 "cntlid": 85, 00:15:28.884 "qid": 0, 00:15:28.884 "state": "enabled", 00:15:28.884 "thread": "nvmf_tgt_poll_group_000", 00:15:28.884 "listen_address": { 00:15:28.884 "trtype": "TCP", 00:15:28.884 "adrfam": "IPv4", 00:15:28.884 "traddr": "10.0.0.2", 00:15:28.884 "trsvcid": "4420" 00:15:28.884 }, 00:15:28.884 "peer_address": { 00:15:28.884 "trtype": "TCP", 00:15:28.884 "adrfam": "IPv4", 00:15:28.884 "traddr": "10.0.0.1", 00:15:28.884 "trsvcid": "43718" 00:15:28.884 }, 00:15:28.884 "auth": { 00:15:28.884 "state": "completed", 00:15:28.884 "digest": "sha384", 00:15:28.884 "dhgroup": "ffdhe6144" 00:15:28.884 } 00:15:28.884 } 00:15:28.884 ]' 00:15:28.884 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.884 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.884 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.142 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.142 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.142 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.142 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.142 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.400 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:15:30.772 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.772 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:30.772 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.772 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.772 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.772 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.772 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:30.772 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.030 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.030 10:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.596 00:15:31.596 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.596 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.596 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.854 { 00:15:31.854 "cntlid": 87, 00:15:31.854 "qid": 0, 00:15:31.854 "state": "enabled", 00:15:31.854 "thread": "nvmf_tgt_poll_group_000", 00:15:31.854 "listen_address": { 00:15:31.854 "trtype": "TCP", 00:15:31.854 "adrfam": "IPv4", 00:15:31.854 "traddr": "10.0.0.2", 00:15:31.854 "trsvcid": "4420" 00:15:31.854 }, 00:15:31.854 "peer_address": { 00:15:31.854 "trtype": "TCP", 00:15:31.854 "adrfam": "IPv4", 00:15:31.854 "traddr": "10.0.0.1", 00:15:31.854 "trsvcid": "43746" 00:15:31.854 }, 00:15:31.854 "auth": { 00:15:31.854 "state": "completed", 00:15:31.854 "digest": "sha384", 00:15:31.854 "dhgroup": "ffdhe6144" 00:15:31.854 } 00:15:31.854 } 00:15:31.854 ]' 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.854 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.112 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.112 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.112 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.370 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc 
--dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.741 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.110 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.110 { 00:15:35.110 "cntlid": 89, 00:15:35.110 "qid": 0, 00:15:35.110 "state": "enabled", 00:15:35.110 "thread": "nvmf_tgt_poll_group_000", 00:15:35.110 "listen_address": { 00:15:35.110 "trtype": "TCP", 00:15:35.110 "adrfam": "IPv4", 00:15:35.110 "traddr": "10.0.0.2", 00:15:35.110 "trsvcid": "4420" 00:15:35.110 }, 00:15:35.110 "peer_address": { 00:15:35.110 "trtype": "TCP", 00:15:35.110 "adrfam": "IPv4", 00:15:35.110 "traddr": "10.0.0.1", 00:15:35.110 "trsvcid": "43782" 00:15:35.110 }, 00:15:35.110 "auth": { 00:15:35.110 "state": "completed", 00:15:35.110 "digest": "sha384", 00:15:35.110 "dhgroup": "ffdhe8192" 00:15:35.110 } 00:15:35.110 } 00:15:35.110 ]' 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:35.110 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.367 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.367 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.367 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.625 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:15:37.033 10:22:26 
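# --- Illustrative sketch, not trace output: the outer loop whose iteration
# --- boundary is visible above (target/auth.sh@92 'for dhgroup ...'). Each
# --- pass pins bdev_nvme to one DH group, so a successful attach proves that
# --- specific sha384/dhgroup combination negotiated end to end. Only the
# --- groups exercised in this part of the trace are listed here.
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    # connect_authenticate then runs for key0..key3, as traced above and below.
done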
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.033 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.992 00:15:37.992 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.992 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.992 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.558 { 00:15:38.558 "cntlid": 91, 00:15:38.558 "qid": 0, 00:15:38.558 "state": "enabled", 00:15:38.558 "thread": "nvmf_tgt_poll_group_000", 00:15:38.558 "listen_address": { 00:15:38.558 "trtype": "TCP", 00:15:38.558 "adrfam": "IPv4", 00:15:38.558 "traddr": "10.0.0.2", 00:15:38.558 "trsvcid": "4420" 00:15:38.558 }, 00:15:38.558 "peer_address": { 00:15:38.558 "trtype": "TCP", 00:15:38.558 "adrfam": "IPv4", 00:15:38.558 "traddr": "10.0.0.1", 00:15:38.558 "trsvcid": "50872" 00:15:38.558 }, 00:15:38.558 "auth": { 00:15:38.558 "state": "completed", 00:15:38.558 "digest": "sha384", 00:15:38.558 "dhgroup": "ffdhe8192" 00:15:38.558 } 00:15:38.558 } 00:15:38.558 ]' 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.558 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.816 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:15:40.187 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.187 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:40.187 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.187 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.187 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.187 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.187 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.187 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.446 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.379 00:15:41.379 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.379 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.379 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.637 { 00:15:41.637 "cntlid": 93, 00:15:41.637 "qid": 0, 00:15:41.637 "state": "enabled", 00:15:41.637 "thread": "nvmf_tgt_poll_group_000", 00:15:41.637 "listen_address": { 00:15:41.637 "trtype": "TCP", 00:15:41.637 "adrfam": "IPv4", 00:15:41.637 "traddr": "10.0.0.2", 00:15:41.637 "trsvcid": "4420" 00:15:41.637 }, 00:15:41.637 "peer_address": { 00:15:41.637 "trtype": "TCP", 00:15:41.637 "adrfam": "IPv4", 00:15:41.637 "traddr": "10.0.0.1", 00:15:41.637 "trsvcid": "50896" 00:15:41.637 }, 00:15:41.637 "auth": { 00:15:41.637 "state": "completed", 00:15:41.637 "digest": "sha384", 00:15:41.637 "dhgroup": "ffdhe8192" 00:15:41.637 } 00:15:41.637 } 00:15:41.637 ]' 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.637 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.894 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.894 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.895 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.152 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:15:43.086 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.344 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:43.344 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.344 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.344 10:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.344 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.344 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.344 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.602 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:44.535 00:15:44.535 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.535 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.535 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.794 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.794 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.794 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.794 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:44.794 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.794 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.794 { 00:15:44.794 "cntlid": 95, 00:15:44.794 "qid": 0, 00:15:44.794 "state": "enabled", 00:15:44.794 "thread": "nvmf_tgt_poll_group_000", 00:15:44.794 "listen_address": { 00:15:44.794 "trtype": "TCP", 00:15:44.794 "adrfam": "IPv4", 00:15:44.794 "traddr": "10.0.0.2", 00:15:44.794 "trsvcid": "4420" 00:15:44.794 }, 00:15:44.794 "peer_address": { 00:15:44.794 "trtype": "TCP", 00:15:44.794 "adrfam": "IPv4", 00:15:44.794 "traddr": "10.0.0.1", 00:15:44.794 "trsvcid": "50926" 00:15:44.794 }, 00:15:44.794 "auth": { 00:15:44.794 "state": "completed", 00:15:44.794 "digest": "sha384", 00:15:44.794 "dhgroup": "ffdhe8192" 00:15:44.794 } 00:15:44.794 } 00:15:44.794 ]' 00:15:44.794 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.794 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.794 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.052 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.052 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.052 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.052 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.052 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.311 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.686 10:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.686 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.252 00:15:47.253 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.253 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.253 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.511 10:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.511 { 00:15:47.511 "cntlid": 97, 00:15:47.511 "qid": 0, 00:15:47.511 "state": "enabled", 00:15:47.511 "thread": "nvmf_tgt_poll_group_000", 00:15:47.511 "listen_address": { 00:15:47.511 "trtype": "TCP", 00:15:47.511 "adrfam": "IPv4", 00:15:47.511 "traddr": "10.0.0.2", 00:15:47.511 "trsvcid": "4420" 00:15:47.511 }, 00:15:47.511 "peer_address": { 00:15:47.511 "trtype": "TCP", 00:15:47.511 "adrfam": "IPv4", 00:15:47.511 "traddr": "10.0.0.1", 00:15:47.511 "trsvcid": "48010" 00:15:47.511 }, 00:15:47.511 "auth": { 00:15:47.511 "state": "completed", 00:15:47.511 "digest": "sha512", 00:15:47.511 "dhgroup": "null" 00:15:47.511 } 00:15:47.511 } 00:15:47.511 ]' 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.511 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.076 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:15:49.010 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.010 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:49.010 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.010 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.010 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.010 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.010 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.010 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.576 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.834 00:15:49.834 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.834 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.834 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.092 { 00:15:50.092 "cntlid": 99, 00:15:50.092 "qid": 0, 00:15:50.092 "state": "enabled", 00:15:50.092 "thread": "nvmf_tgt_poll_group_000", 00:15:50.092 "listen_address": { 00:15:50.092 "trtype": "TCP", 00:15:50.092 "adrfam": "IPv4", 00:15:50.092 
"traddr": "10.0.0.2", 00:15:50.092 "trsvcid": "4420" 00:15:50.092 }, 00:15:50.092 "peer_address": { 00:15:50.092 "trtype": "TCP", 00:15:50.092 "adrfam": "IPv4", 00:15:50.092 "traddr": "10.0.0.1", 00:15:50.092 "trsvcid": "48036" 00:15:50.092 }, 00:15:50.092 "auth": { 00:15:50.092 "state": "completed", 00:15:50.092 "digest": "sha512", 00:15:50.092 "dhgroup": "null" 00:15:50.092 } 00:15:50.092 } 00:15:50.092 ]' 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.092 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.350 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:15:51.723 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.723 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:51.723 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.723 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.723 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.723 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.723 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.723 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.981 10:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.981 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.240 00:15:52.240 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.240 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.240 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.498 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.498 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.498 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.498 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.498 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.498 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.498 { 00:15:52.498 "cntlid": 101, 00:15:52.498 "qid": 0, 00:15:52.498 "state": "enabled", 00:15:52.498 "thread": "nvmf_tgt_poll_group_000", 00:15:52.498 "listen_address": { 00:15:52.498 "trtype": "TCP", 00:15:52.498 "adrfam": "IPv4", 00:15:52.498 "traddr": "10.0.0.2", 00:15:52.498 "trsvcid": "4420" 00:15:52.498 }, 00:15:52.498 "peer_address": { 00:15:52.498 "trtype": "TCP", 00:15:52.498 "adrfam": "IPv4", 00:15:52.498 "traddr": "10.0.0.1", 00:15:52.498 "trsvcid": "48062" 00:15:52.498 }, 00:15:52.498 "auth": { 00:15:52.498 "state": "completed", 00:15:52.498 "digest": "sha512", 00:15:52.498 "dhgroup": "null" 
00:15:52.498 } 00:15:52.498 } 00:15:52.498 ]' 00:15:52.498 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.498 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.498 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.756 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:52.756 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.756 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.756 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.756 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.015 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:15:54.387 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.387 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:54.387 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.387 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.387 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.387 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.387 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.387 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.387 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.952 00:15:54.952 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.952 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.953 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.210 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.210 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.210 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.210 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.210 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.211 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.211 { 00:15:55.211 "cntlid": 103, 00:15:55.211 "qid": 0, 00:15:55.211 "state": "enabled", 00:15:55.211 "thread": "nvmf_tgt_poll_group_000", 00:15:55.211 "listen_address": { 00:15:55.211 "trtype": "TCP", 00:15:55.211 "adrfam": "IPv4", 00:15:55.211 "traddr": "10.0.0.2", 00:15:55.211 "trsvcid": "4420" 00:15:55.211 }, 00:15:55.211 "peer_address": { 00:15:55.211 "trtype": "TCP", 00:15:55.211 "adrfam": "IPv4", 00:15:55.211 "traddr": "10.0.0.1", 00:15:55.211 "trsvcid": "48082" 00:15:55.211 }, 00:15:55.211 "auth": { 00:15:55.211 "state": "completed", 00:15:55.211 "digest": "sha512", 00:15:55.211 "dhgroup": "null" 00:15:55.211 } 00:15:55.211 } 00:15:55.211 ]' 00:15:55.211 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.211 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.211 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.211 10:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.211 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.211 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.211 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.211 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.469 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:15:56.843 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.843 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:56.843 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.843 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.843 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.843 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.843 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.843 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.843 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.102 10:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.102 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.360 00:15:57.360 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.360 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.360 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.618 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.618 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.618 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.618 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.618 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.618 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.618 { 00:15:57.618 "cntlid": 105, 00:15:57.618 "qid": 0, 00:15:57.618 "state": "enabled", 00:15:57.618 "thread": "nvmf_tgt_poll_group_000", 00:15:57.618 "listen_address": { 00:15:57.618 "trtype": "TCP", 00:15:57.618 "adrfam": "IPv4", 00:15:57.618 "traddr": "10.0.0.2", 00:15:57.618 "trsvcid": "4420" 00:15:57.618 }, 00:15:57.618 "peer_address": { 00:15:57.618 "trtype": "TCP", 00:15:57.618 "adrfam": "IPv4", 00:15:57.618 "traddr": "10.0.0.1", 00:15:57.618 "trsvcid": "47346" 00:15:57.618 }, 00:15:57.618 "auth": { 00:15:57.618 "state": "completed", 00:15:57.618 "digest": "sha512", 00:15:57.618 "dhgroup": "ffdhe2048" 00:15:57.618 } 00:15:57.618 } 00:15:57.618 ]' 00:15:57.618 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.876 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.876 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.876 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.876 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.876 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.876 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.876 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.135 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:15:59.509 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.510 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:59.510 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.510 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.510 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.510 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.510 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.510 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.510 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:59.510 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.510 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:59.510 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:59.510 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:59.510 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.510 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.510 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.510 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.768 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:59.768 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.768 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.026 00:16:00.026 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.026 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.026 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.285 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.285 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.285 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.285 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.285 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.285 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.285 { 00:16:00.285 "cntlid": 107, 00:16:00.285 "qid": 0, 00:16:00.285 "state": "enabled", 00:16:00.285 "thread": "nvmf_tgt_poll_group_000", 00:16:00.285 "listen_address": { 00:16:00.285 "trtype": "TCP", 00:16:00.285 "adrfam": "IPv4", 00:16:00.285 "traddr": "10.0.0.2", 00:16:00.285 "trsvcid": "4420" 00:16:00.285 }, 00:16:00.285 "peer_address": { 00:16:00.285 "trtype": "TCP", 00:16:00.285 "adrfam": "IPv4", 00:16:00.285 "traddr": "10.0.0.1", 00:16:00.285 "trsvcid": "47374" 00:16:00.285 }, 00:16:00.285 "auth": { 00:16:00.285 "state": "completed", 00:16:00.285 "digest": "sha512", 00:16:00.285 "dhgroup": "ffdhe2048" 00:16:00.285 } 00:16:00.285 } 00:16:00.285 ]' 00:16:00.285 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.285 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.285 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.285 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.285 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.543 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.543 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.543 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.801 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
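The trace repeats one fixed round trip per (digest, dhgroup, keyid) combination. Below is a minimal sketch of that round trip, assuming a plain bash loop: the socket paths, NQNs, the 10.0.0.2:4420 listener, and all RPC flags are copied from the log above, while the loop structure and the rpc/subnqn/hostnqn shorthands are illustrative assumptions rather than the actual target/auth.sh code.

#!/usr/bin/env bash
# Sketch only: condenses the per-combination round trip exercised in this trace.
# Values and flags are taken from the log; the loop and variable names are
# assumptions for illustration, not the test script itself.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc

for keyid in 0 1 2 3; do
    # Host side: restrict the initiator to the digest/dhgroup under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # Target side: register the host NQN with the key under test (the real
    # test also passes --dhchap-ctrlr-key "ckey$keyid" when a controller
    # key is defined for that keyid).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    # Host side: attach, authenticating with the same key.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid"
    # Verify the qpair negotiated the expected parameters and finished auth.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'
    # Tear down before the next combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
done

Each iteration in the trace additionally re-validates the same secrets through the kernel initiator, connecting with nvme connect --dhchap-secret/--dhchap-ctrl-secret and disconnecting with nvme disconnect before the host is removed from the subsystem.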
00:16:02.176 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.742 00:16:02.742 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.742 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.742 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.000 { 00:16:03.000 "cntlid": 109, 00:16:03.000 "qid": 0, 00:16:03.000 "state": "enabled", 00:16:03.000 "thread": "nvmf_tgt_poll_group_000", 00:16:03.000 "listen_address": { 00:16:03.000 "trtype": "TCP", 00:16:03.000 "adrfam": "IPv4", 00:16:03.000 "traddr": "10.0.0.2", 00:16:03.000 "trsvcid": "4420" 00:16:03.000 }, 00:16:03.000 "peer_address": { 00:16:03.000 "trtype": "TCP", 00:16:03.000 "adrfam": "IPv4", 00:16:03.000 "traddr": "10.0.0.1", 00:16:03.000 "trsvcid": "47412" 00:16:03.000 }, 00:16:03.000 "auth": { 00:16:03.000 "state": "completed", 00:16:03.000 "digest": "sha512", 00:16:03.000 "dhgroup": "ffdhe2048" 00:16:03.000 } 00:16:03.000 } 00:16:03.000 ]' 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.000 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.259 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:16:04.658 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.658 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:04.658 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.658 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.658 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.658 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.658 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.659 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.916 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:04.916 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.916 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.916 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:04.916 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:04.917 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.917 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:04.917 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.917 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.917 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.917 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.917 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.175 00:16:05.175 10:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.175 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.175 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.435 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.435 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.435 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.435 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.435 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.435 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.435 { 00:16:05.435 "cntlid": 111, 00:16:05.435 "qid": 0, 00:16:05.435 "state": "enabled", 00:16:05.435 "thread": "nvmf_tgt_poll_group_000", 00:16:05.435 "listen_address": { 00:16:05.435 "trtype": "TCP", 00:16:05.435 "adrfam": "IPv4", 00:16:05.435 "traddr": "10.0.0.2", 00:16:05.435 "trsvcid": "4420" 00:16:05.435 }, 00:16:05.435 "peer_address": { 00:16:05.435 "trtype": "TCP", 00:16:05.435 "adrfam": "IPv4", 00:16:05.435 "traddr": "10.0.0.1", 00:16:05.435 "trsvcid": "50516" 00:16:05.435 }, 00:16:05.435 "auth": { 00:16:05.435 "state": "completed", 00:16:05.435 "digest": "sha512", 00:16:05.435 "dhgroup": "ffdhe2048" 00:16:05.435 } 00:16:05.435 } 00:16:05.435 ]' 00:16:05.435 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.435 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.435 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.693 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.693 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.693 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.693 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.693 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.952 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:16:07.325 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.325 10:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:07.325 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.325 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.325 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.325 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.325 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.325 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:07.325 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.325 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.891 00:16:07.891 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.891 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.891 10:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.148 { 00:16:08.148 "cntlid": 113, 00:16:08.148 "qid": 0, 00:16:08.148 "state": "enabled", 00:16:08.148 "thread": "nvmf_tgt_poll_group_000", 00:16:08.148 "listen_address": { 00:16:08.148 "trtype": "TCP", 00:16:08.148 "adrfam": "IPv4", 00:16:08.148 "traddr": "10.0.0.2", 00:16:08.148 "trsvcid": "4420" 00:16:08.148 }, 00:16:08.148 "peer_address": { 00:16:08.148 "trtype": "TCP", 00:16:08.148 "adrfam": "IPv4", 00:16:08.148 "traddr": "10.0.0.1", 00:16:08.148 "trsvcid": "50542" 00:16:08.148 }, 00:16:08.148 "auth": { 00:16:08.148 "state": "completed", 00:16:08.148 "digest": "sha512", 00:16:08.148 "dhgroup": "ffdhe3072" 00:16:08.148 } 00:16:08.148 } 00:16:08.148 ]' 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.148 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.712 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:16:09.645 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.645 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:09.645 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.645 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.645 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.645 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.645 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:09.645 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.211 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.468 00:16:10.468 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.468 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.468 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.726 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:16:10.726 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.726 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.726 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.726 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.726 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.726 { 00:16:10.726 "cntlid": 115, 00:16:10.726 "qid": 0, 00:16:10.726 "state": "enabled", 00:16:10.726 "thread": "nvmf_tgt_poll_group_000", 00:16:10.726 "listen_address": { 00:16:10.726 "trtype": "TCP", 00:16:10.726 "adrfam": "IPv4", 00:16:10.726 "traddr": "10.0.0.2", 00:16:10.726 "trsvcid": "4420" 00:16:10.726 }, 00:16:10.726 "peer_address": { 00:16:10.726 "trtype": "TCP", 00:16:10.726 "adrfam": "IPv4", 00:16:10.726 "traddr": "10.0.0.1", 00:16:10.726 "trsvcid": "50570" 00:16:10.726 }, 00:16:10.726 "auth": { 00:16:10.726 "state": "completed", 00:16:10.726 "digest": "sha512", 00:16:10.726 "dhgroup": "ffdhe3072" 00:16:10.726 } 00:16:10.726 } 00:16:10.726 ]' 00:16:10.726 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.726 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.726 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.983 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:10.983 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.983 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.983 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.983 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.241 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:16:12.613 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.613 10:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.613 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.179 00:16:13.179 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.179 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.179 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.437 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.437 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.437 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.437 10:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.437 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.437 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.437 { 00:16:13.437 "cntlid": 117, 00:16:13.437 "qid": 0, 00:16:13.437 "state": "enabled", 00:16:13.437 "thread": "nvmf_tgt_poll_group_000", 00:16:13.437 "listen_address": { 00:16:13.437 "trtype": "TCP", 00:16:13.437 "adrfam": "IPv4", 00:16:13.437 "traddr": "10.0.0.2", 00:16:13.437 "trsvcid": "4420" 00:16:13.437 }, 00:16:13.437 "peer_address": { 00:16:13.437 "trtype": "TCP", 00:16:13.437 "adrfam": "IPv4", 00:16:13.437 "traddr": "10.0.0.1", 00:16:13.437 "trsvcid": "50598" 00:16:13.437 }, 00:16:13.437 "auth": { 00:16:13.437 "state": "completed", 00:16:13.437 "digest": "sha512", 00:16:13.437 "dhgroup": "ffdhe3072" 00:16:13.437 } 00:16:13.437 } 00:16:13.437 ]' 00:16:13.438 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.438 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.438 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.438 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.438 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.438 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.438 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.438 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.706 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:16:15.079 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.079 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:15.079 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.079 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.079 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.079 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.080 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:16:15.080 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.337 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.338 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:15.338 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:15.595 00:16:15.595 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.595 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.595 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.853 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.853 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.853 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.853 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.853 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.853 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.853 { 00:16:15.853 "cntlid": 119, 00:16:15.853 "qid": 0, 00:16:15.853 "state": "enabled", 00:16:15.853 "thread": 
"nvmf_tgt_poll_group_000", 00:16:15.853 "listen_address": { 00:16:15.853 "trtype": "TCP", 00:16:15.853 "adrfam": "IPv4", 00:16:15.853 "traddr": "10.0.0.2", 00:16:15.853 "trsvcid": "4420" 00:16:15.853 }, 00:16:15.853 "peer_address": { 00:16:15.853 "trtype": "TCP", 00:16:15.853 "adrfam": "IPv4", 00:16:15.853 "traddr": "10.0.0.1", 00:16:15.853 "trsvcid": "40230" 00:16:15.853 }, 00:16:15.853 "auth": { 00:16:15.853 "state": "completed", 00:16:15.853 "digest": "sha512", 00:16:15.853 "dhgroup": "ffdhe3072" 00:16:15.853 } 00:16:15.853 } 00:16:15.853 ]' 00:16:15.853 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.853 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.853 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.110 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.110 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.110 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.110 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.110 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.367 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.741 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.307 00:16:18.307 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.307 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.307 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.565 { 00:16:18.565 "cntlid": 121, 00:16:18.565 "qid": 0, 00:16:18.565 "state": "enabled", 00:16:18.565 "thread": "nvmf_tgt_poll_group_000", 00:16:18.565 "listen_address": { 00:16:18.565 "trtype": "TCP", 00:16:18.565 "adrfam": "IPv4", 00:16:18.565 "traddr": "10.0.0.2", 00:16:18.565 "trsvcid": "4420" 00:16:18.565 }, 00:16:18.565 "peer_address": { 00:16:18.565 "trtype": "TCP", 00:16:18.565 "adrfam": 
"IPv4", 00:16:18.565 "traddr": "10.0.0.1", 00:16:18.565 "trsvcid": "40248" 00:16:18.565 }, 00:16:18.565 "auth": { 00:16:18.565 "state": "completed", 00:16:18.565 "digest": "sha512", 00:16:18.565 "dhgroup": "ffdhe4096" 00:16:18.565 } 00:16:18.565 } 00:16:18.565 ]' 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.565 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.131 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:16:20.064 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.064 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:20.064 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.064 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.064 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.064 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.064 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:20.064 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:20.322 
10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.322 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.323 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.888 00:16:20.888 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.888 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.888 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.147 { 00:16:21.147 "cntlid": 123, 00:16:21.147 "qid": 0, 00:16:21.147 "state": "enabled", 00:16:21.147 "thread": "nvmf_tgt_poll_group_000", 00:16:21.147 "listen_address": { 00:16:21.147 "trtype": "TCP", 00:16:21.147 "adrfam": "IPv4", 00:16:21.147 "traddr": "10.0.0.2", 00:16:21.147 "trsvcid": "4420" 00:16:21.147 }, 00:16:21.147 "peer_address": { 00:16:21.147 "trtype": "TCP", 00:16:21.147 "adrfam": "IPv4", 00:16:21.147 "traddr": "10.0.0.1", 00:16:21.147 "trsvcid": "40282" 00:16:21.147 }, 00:16:21.147 "auth": { 00:16:21.147 "state": "completed", 00:16:21.147 "digest": "sha512", 00:16:21.147 "dhgroup": "ffdhe4096" 00:16:21.147 } 00:16:21.147 } 00:16:21.147 ]' 00:16:21.147 10:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.147 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.405 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:16:22.779 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.779 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:22.779 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.779 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.779 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.779 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.779 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:22.779 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.037 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.295 00:16:23.295 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.295 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.295 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.859 { 00:16:23.859 "cntlid": 125, 00:16:23.859 "qid": 0, 00:16:23.859 "state": "enabled", 00:16:23.859 "thread": "nvmf_tgt_poll_group_000", 00:16:23.859 "listen_address": { 00:16:23.859 "trtype": "TCP", 00:16:23.859 "adrfam": "IPv4", 00:16:23.859 "traddr": "10.0.0.2", 00:16:23.859 "trsvcid": "4420" 00:16:23.859 }, 00:16:23.859 "peer_address": { 00:16:23.859 "trtype": "TCP", 00:16:23.859 "adrfam": "IPv4", 00:16:23.859 "traddr": "10.0.0.1", 00:16:23.859 "trsvcid": "40312" 00:16:23.859 }, 00:16:23.859 "auth": { 00:16:23.859 "state": "completed", 00:16:23.859 "digest": "sha512", 00:16:23.859 "dhgroup": "ffdhe4096" 00:16:23.859 } 00:16:23.859 } 00:16:23.859 ]' 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.859 
10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.859 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.116 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:16:25.487 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.487 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:25.487 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.487 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.487 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.487 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.487 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:25.487 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.745 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.309 00:16:26.309 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.309 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.309 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.309 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.309 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.309 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.309 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.309 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.309 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.309 { 00:16:26.309 "cntlid": 127, 00:16:26.309 "qid": 0, 00:16:26.309 "state": "enabled", 00:16:26.309 "thread": "nvmf_tgt_poll_group_000", 00:16:26.309 "listen_address": { 00:16:26.309 "trtype": "TCP", 00:16:26.309 "adrfam": "IPv4", 00:16:26.309 "traddr": "10.0.0.2", 00:16:26.309 "trsvcid": "4420" 00:16:26.309 }, 00:16:26.309 "peer_address": { 00:16:26.309 "trtype": "TCP", 00:16:26.309 "adrfam": "IPv4", 00:16:26.309 "traddr": "10.0.0.1", 00:16:26.309 "trsvcid": "50404" 00:16:26.309 }, 00:16:26.309 "auth": { 00:16:26.309 "state": "completed", 00:16:26.309 "digest": "sha512", 00:16:26.309 "dhgroup": "ffdhe4096" 00:16:26.309 } 00:16:26.309 } 00:16:26.309 ]' 00:16:26.309 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.567 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.567 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.567 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.567 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.567 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.567 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.567 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.824 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:16:27.757 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.757 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:27.757 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.757 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.757 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.757 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.757 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.757 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:27.757 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.324 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:28.324 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.324 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.324 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:28.324 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:28.325 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.325 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.325 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.325 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.325 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.325 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.325 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.890 00:16:28.890 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.890 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.890 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.148 { 00:16:29.148 "cntlid": 129, 00:16:29.148 "qid": 0, 00:16:29.148 "state": "enabled", 00:16:29.148 "thread": "nvmf_tgt_poll_group_000", 00:16:29.148 "listen_address": { 00:16:29.148 "trtype": "TCP", 00:16:29.148 "adrfam": "IPv4", 00:16:29.148 "traddr": "10.0.0.2", 00:16:29.148 "trsvcid": "4420" 00:16:29.148 }, 00:16:29.148 "peer_address": { 00:16:29.148 "trtype": "TCP", 00:16:29.148 "adrfam": "IPv4", 00:16:29.148 "traddr": "10.0.0.1", 00:16:29.148 "trsvcid": "50440" 00:16:29.148 }, 00:16:29.148 "auth": { 00:16:29.148 "state": "completed", 00:16:29.148 "digest": "sha512", 00:16:29.148 "dhgroup": "ffdhe6144" 00:16:29.148 } 00:16:29.148 } 00:16:29.148 ]' 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.148 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.406 
10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:16:30.823 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.823 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:30.823 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.823 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.823 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.823 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.823 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.823 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.082 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.648 00:16:31.648 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.648 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.648 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.906 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.906 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.906 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.906 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.906 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.906 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.906 { 00:16:31.906 "cntlid": 131, 00:16:31.906 "qid": 0, 00:16:31.906 "state": "enabled", 00:16:31.906 "thread": "nvmf_tgt_poll_group_000", 00:16:31.906 "listen_address": { 00:16:31.906 "trtype": "TCP", 00:16:31.906 "adrfam": "IPv4", 00:16:31.906 "traddr": "10.0.0.2", 00:16:31.906 "trsvcid": "4420" 00:16:31.906 }, 00:16:31.906 "peer_address": { 00:16:31.906 "trtype": "TCP", 00:16:31.906 "adrfam": "IPv4", 00:16:31.906 "traddr": "10.0.0.1", 00:16:31.906 "trsvcid": "50464" 00:16:31.906 }, 00:16:31.906 "auth": { 00:16:31.906 "state": "completed", 00:16:31.906 "digest": "sha512", 00:16:31.906 "dhgroup": "ffdhe6144" 00:16:31.906 } 00:16:31.906 } 00:16:31.906 ]' 00:16:31.906 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.906 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.906 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.164 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.164 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.164 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.164 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.164 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.422 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.795 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.726 
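The same connect/verify/disconnect cycle repeats above for each dhgroup and key index. A minimal bash sketch of that cycle, using the rpc.py path, sockets, and NQNs shown in this log and sha512 as in this stretch of the run; the loop body is an approximation of what target/auth.sh drives (and assumes both SPDK instances are up with key0..key3/ckey0..ckey2 already loaded), not the script verbatim:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc

for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in 0 1 2 3; do
    # key3 is attached without a controller key in this run
    ckey=(); (( keyid != 3 )) && ckey=(--dhchap-ctrlr-key "ckey$keyid")
    # host side: allow exactly one digest/dhgroup pair for the handshake
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # target side: register the host with the matching DH-HMAC-CHAP keys
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" "${ckey[@]}"
    # authenticate by attaching, then tear the session back down
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key "key$keyid" "${ckey[@]}"
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
  done
done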
00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.726 { 00:16:34.726 "cntlid": 133, 00:16:34.726 "qid": 0, 00:16:34.726 "state": "enabled", 00:16:34.726 "thread": "nvmf_tgt_poll_group_000", 00:16:34.726 "listen_address": { 00:16:34.726 "trtype": "TCP", 00:16:34.726 "adrfam": "IPv4", 00:16:34.726 "traddr": "10.0.0.2", 00:16:34.726 "trsvcid": "4420" 00:16:34.726 }, 00:16:34.726 "peer_address": { 00:16:34.726 "trtype": "TCP", 00:16:34.726 "adrfam": "IPv4", 00:16:34.726 "traddr": "10.0.0.1", 00:16:34.726 "trsvcid": "50500" 00:16:34.726 }, 00:16:34.726 "auth": { 00:16:34.726 "state": "completed", 00:16:34.726 "digest": "sha512", 00:16:34.726 "dhgroup": "ffdhe6144" 00:16:34.726 } 00:16:34.726 } 00:16:34.726 ]' 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.726 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.984 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.984 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.984 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.984 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.984 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.242 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.616 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.616 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.617 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.550 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.551 { 00:16:37.551 "cntlid": 135, 00:16:37.551 "qid": 0, 00:16:37.551 "state": "enabled", 00:16:37.551 "thread": "nvmf_tgt_poll_group_000", 00:16:37.551 "listen_address": { 00:16:37.551 "trtype": "TCP", 00:16:37.551 "adrfam": "IPv4", 00:16:37.551 "traddr": "10.0.0.2", 00:16:37.551 "trsvcid": "4420" 00:16:37.551 }, 00:16:37.551 "peer_address": { 00:16:37.551 "trtype": "TCP", 00:16:37.551 "adrfam": "IPv4", 00:16:37.551 "traddr": "10.0.0.1", 00:16:37.551 "trsvcid": "49982" 00:16:37.551 }, 00:16:37.551 "auth": { 00:16:37.551 "state": "completed", 00:16:37.551 "digest": "sha512", 00:16:37.551 "dhgroup": "ffdhe6144" 00:16:37.551 } 00:16:37.551 } 00:16:37.551 ]' 00:16:37.551 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.808 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.808 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.808 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.808 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.808 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.808 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.808 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.066 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:16:39.441 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.441 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:39.441 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.441 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:39.441 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.441 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.441 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.441 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.441 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.699 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.633 00:16:40.633 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.633 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.633 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.891 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.891 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
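Each attach is then validated end to end: the @44 step confirms the controller came up under the expected name, and @45-@48 read the qpair back from the target and assert the negotiated auth parameters. A sketch of that verification reusing the exact jq filters from this log (paths as above; ffdhe8192 matches the round in progress here):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock

# the host-side controller must exist under the name we attached with
name=$("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1

# the target reports one qpair whose auth block carries the negotiated parameters
qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512    ]] || exit 1
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe8192 ]] || exit 1
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]] || exit 1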
00:16:40.891 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.891 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.891 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.891 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.891 { 00:16:40.891 "cntlid": 137, 00:16:40.891 "qid": 0, 00:16:40.891 "state": "enabled", 00:16:40.891 "thread": "nvmf_tgt_poll_group_000", 00:16:40.891 "listen_address": { 00:16:40.891 "trtype": "TCP", 00:16:40.891 "adrfam": "IPv4", 00:16:40.891 "traddr": "10.0.0.2", 00:16:40.891 "trsvcid": "4420" 00:16:40.891 }, 00:16:40.891 "peer_address": { 00:16:40.891 "trtype": "TCP", 00:16:40.891 "adrfam": "IPv4", 00:16:40.891 "traddr": "10.0.0.1", 00:16:40.891 "trsvcid": "50014" 00:16:40.891 }, 00:16:40.891 "auth": { 00:16:40.891 "state": "completed", 00:16:40.891 "digest": "sha512", 00:16:40.891 "dhgroup": "ffdhe8192" 00:16:40.891 } 00:16:40.891 } 00:16:40.891 ]' 00:16:40.891 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.891 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.891 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.148 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.148 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.148 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.148 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.148 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.406 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.778 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.712 00:16:43.970 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.970 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.970 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
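The @52/@55 steps exercise the same handshake from the kernel initiator through nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line. A sketch of that leg with the secrets abbreviated; the full DHHC-1 blobs for each key index appear verbatim in the log, and the placeholders below are illustrative only:

HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
SUBNQN=nqn.2024-03.io.spdk:cnode0

# in-band authenticated connect; <host-secret>/<ctrl-secret> stand in for the
# DHHC-1:xx:... strings printed above for the key index under test
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" \
    --dhchap-secret 'DHHC-1:01:<host-secret>' \
    --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-secret>'

# a clean teardown prints "NQN:... disconnected 1 controller(s)", as seen above
nvme disconnect -n "$SUBNQN"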
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.228 { 00:16:44.228 "cntlid": 139, 00:16:44.228 "qid": 0, 00:16:44.228 "state": "enabled", 00:16:44.228 "thread": "nvmf_tgt_poll_group_000", 00:16:44.228 "listen_address": { 00:16:44.228 "trtype": "TCP", 00:16:44.228 "adrfam": "IPv4", 00:16:44.228 "traddr": "10.0.0.2", 00:16:44.228 "trsvcid": "4420" 00:16:44.228 }, 00:16:44.228 "peer_address": { 00:16:44.228 "trtype": "TCP", 00:16:44.228 "adrfam": "IPv4", 00:16:44.228 "traddr": "10.0.0.1", 00:16:44.228 "trsvcid": "50046" 00:16:44.228 }, 00:16:44.228 "auth": { 00:16:44.228 "state": "completed", 00:16:44.228 "digest": "sha512", 00:16:44.228 "dhgroup": "ffdhe8192" 00:16:44.228 } 00:16:44.228 } 00:16:44.228 ]' 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.228 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.487 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZTU0NTZlNDcwZGIyYTJmZThhZGRlYzY3YjU2MzUxZDIiCriq: --dhchap-ctrl-secret DHHC-1:02:ZGQwMTJhYTY5ZmUxMTk5OTAzZDI3NzU3Mjk1YTA0ZGM5OWY0YjI1NjAyOTBmZDdmg/R3jA==: 00:16:45.857 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.857 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:45.857 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.857 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.857 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.857 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.857 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.857 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.114 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.045 00:16:47.045 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.045 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.045 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.610 { 00:16:47.610 "cntlid": 141, 00:16:47.610 "qid": 0, 00:16:47.610 "state": "enabled", 00:16:47.610 "thread": "nvmf_tgt_poll_group_000", 00:16:47.610 "listen_address": 
{ 00:16:47.610 "trtype": "TCP", 00:16:47.610 "adrfam": "IPv4", 00:16:47.610 "traddr": "10.0.0.2", 00:16:47.610 "trsvcid": "4420" 00:16:47.610 }, 00:16:47.610 "peer_address": { 00:16:47.610 "trtype": "TCP", 00:16:47.610 "adrfam": "IPv4", 00:16:47.610 "traddr": "10.0.0.1", 00:16:47.610 "trsvcid": "43654" 00:16:47.610 }, 00:16:47.610 "auth": { 00:16:47.610 "state": "completed", 00:16:47.610 "digest": "sha512", 00:16:47.610 "dhgroup": "ffdhe8192" 00:16:47.610 } 00:16:47.610 } 00:16:47.610 ]' 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.610 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.868 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NmFlMDZhZjdhODM0ZDA0MTQ4Y2UwNTI4ODNmMzkwOGY0NjRmYmY2NDMwZGI0MTRmxEjbmQ==: --dhchap-ctrl-secret DHHC-1:01:ZTFmODExNWYyMGJjMTc3Y2JhNGM4ZTJlNDIyNGQ1MjTabmie: 00:16:49.277 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.277 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:49.277 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.277 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.277 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.277 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.277 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.277 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.277 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.649 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.649 { 00:16:50.649 "cntlid": 143, 00:16:50.649 "qid": 0, 00:16:50.649 "state": "enabled", 00:16:50.649 "thread": "nvmf_tgt_poll_group_000", 00:16:50.649 "listen_address": { 00:16:50.649 "trtype": "TCP", 00:16:50.649 "adrfam": "IPv4", 00:16:50.649 "traddr": "10.0.0.2", 00:16:50.649 "trsvcid": "4420" 00:16:50.649 }, 00:16:50.649 "peer_address": { 00:16:50.649 "trtype": "TCP", 00:16:50.649 "adrfam": "IPv4", 00:16:50.649 "traddr": "10.0.0.1", 00:16:50.649 "trsvcid": "43694" 00:16:50.649 }, 00:16:50.649 "auth": { 00:16:50.649 "state": "completed", 00:16:50.649 "digest": "sha512", 00:16:50.649 "dhgroup": 
"ffdhe8192" 00:16:50.649 } 00:16:50.649 } 00:16:50.649 ]' 00:16:50.649 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.907 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.907 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.907 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.907 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.907 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.907 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.907 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.165 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.538 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.538 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.911 00:16:53.911 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.911 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.911 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.911 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.911 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.911 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.911 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.911 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.911 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.911 { 00:16:53.911 "cntlid": 145, 00:16:53.911 "qid": 0, 00:16:53.911 "state": "enabled", 00:16:53.912 "thread": "nvmf_tgt_poll_group_000", 00:16:53.912 "listen_address": { 00:16:53.912 "trtype": "TCP", 00:16:53.912 "adrfam": "IPv4", 00:16:53.912 "traddr": "10.0.0.2", 00:16:53.912 "trsvcid": "4420" 00:16:53.912 }, 00:16:53.912 "peer_address": { 00:16:53.912 "trtype": "TCP", 00:16:53.912 "adrfam": "IPv4", 00:16:53.912 "traddr": "10.0.0.1", 00:16:53.912 "trsvcid": "43714" 00:16:53.912 }, 00:16:53.912 "auth": { 00:16:53.912 
"state": "completed", 00:16:53.912 "digest": "sha512", 00:16:53.912 "dhgroup": "ffdhe8192" 00:16:53.912 } 00:16:53.912 } 00:16:53.912 ]' 00:16:53.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.169 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.169 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.169 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.169 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.169 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.427 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NTM4ODNlMzcxYzAyNjlmNmZhZTRhYWY1Y2M5MGJkYTNhZTBiMGVjZWJhODY1YTFhUtLRyg==: --dhchap-ctrl-secret DHHC-1:03:MGFlZWQ3NDhjN2I3MjgwN2FlMGUyZGQ4N2U0MjEzYjQ4OGNjNzY5ZjkwOWJiMTZmY2FhNTkyNWVhMDJhODVjNZIdEAk=: 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:55.801 10:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.801 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:56.734 request: 00:16:56.734 { 00:16:56.734 "name": "nvme0", 00:16:56.735 "trtype": "tcp", 00:16:56.735 "traddr": "10.0.0.2", 00:16:56.735 "adrfam": "ipv4", 00:16:56.735 "trsvcid": "4420", 00:16:56.735 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:16:56.735 "prchk_reftag": false, 00:16:56.735 "prchk_guard": false, 00:16:56.735 "hdgst": false, 00:16:56.735 "ddgst": false, 00:16:56.735 "dhchap_key": "key2", 00:16:56.735 "method": "bdev_nvme_attach_controller", 00:16:56.735 "req_id": 1 00:16:56.735 } 00:16:56.735 Got JSON-RPC error response 00:16:56.735 response: 00:16:56.735 { 00:16:56.735 "code": -5, 00:16:56.735 "message": "Input/output error" 00:16:56.735 } 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.735 
10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.735 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.669 request: 00:16:57.669 { 00:16:57.669 "name": "nvme0", 00:16:57.669 "trtype": "tcp", 00:16:57.669 "traddr": "10.0.0.2", 00:16:57.669 "adrfam": "ipv4", 00:16:57.669 "trsvcid": "4420", 00:16:57.669 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:16:57.669 "prchk_reftag": false, 00:16:57.669 "prchk_guard": false, 00:16:57.669 "hdgst": false, 00:16:57.669 "ddgst": false, 00:16:57.669 "dhchap_key": "key1", 00:16:57.669 "dhchap_ctrlr_key": "ckey2", 00:16:57.669 "method": "bdev_nvme_attach_controller", 00:16:57.669 "req_id": 1 00:16:57.669 } 00:16:57.669 Got JSON-RPC error response 00:16:57.669 response: 00:16:57.669 { 00:16:57.669 "code": -5, 00:16:57.669 "message": "Input/output error" 00:16:57.669 } 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.669 10:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:57.669 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.670 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:57.670 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.670 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:57.670 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.670 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.670 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.635 request: 00:16:58.635 { 00:16:58.635 "name": "nvme0", 00:16:58.635 "trtype": "tcp", 00:16:58.635 "traddr": "10.0.0.2", 00:16:58.635 "adrfam": "ipv4", 00:16:58.635 "trsvcid": "4420", 00:16:58.635 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:58.635 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:16:58.635 "prchk_reftag": false, 00:16:58.635 "prchk_guard": false, 00:16:58.635 "hdgst": false, 00:16:58.635 "ddgst": false, 00:16:58.635 "dhchap_key": "key1", 00:16:58.635 "dhchap_ctrlr_key": "ckey1", 00:16:58.635 "method": "bdev_nvme_attach_controller", 00:16:58.635 "req_id": 1 00:16:58.635 } 00:16:58.635 Got JSON-RPC error response 00:16:58.635 response: 00:16:58.635 { 00:16:58.635 "code": -5, 00:16:58.635 "message": "Input/output error" 00:16:58.635 } 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1492464 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1492464 ']' 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1492464 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1492464 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1492464' 00:16:58.635 killing process with pid 1492464 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1492464 00:16:58.635 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1492464 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1513096 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1513096 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1513096 ']' 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.896 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1513096 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1513096 ']' 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.154 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
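[The JSON-RPC failures above (code -5, Input/output error) are the expected outcomes: the harness's NOT wrapper asserts that bdev_nvme_attach_controller fails whenever the host offers a DH-HMAC-CHAP key or controller key that does not match what nvmf_subsystem_add_host registered on the target. With that phase done, the trace kills the first target (pid 1492464) and restarts it with authentication debug logging (-L nvmf_auth). A minimal sketch of that startup, assuming a built SPDK tree and the namespace created earlier in the run; the polling loop stands in for the harness's waitforlisten helper:

    # start the target inside the test namespace, deferring framework init
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # wait until the app is listening on its UNIX domain RPC socket
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
    # --wait-for-rpc defers initialization until an explicit RPC; the usual
    # companion call (the exact rpc_cmd args are not visible in this trace) is:
    ./scripts/rpc.py framework_start_init
]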
00:16:59.155 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.155 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.413 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.413 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:59.413 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:59.413 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.413 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.672 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.607 00:17:00.607 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.607 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.607 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.865 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.865 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.865 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.865 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.123 { 00:17:01.123 "cntlid": 1, 00:17:01.123 "qid": 0, 00:17:01.123 "state": "enabled", 00:17:01.123 "thread": "nvmf_tgt_poll_group_000", 00:17:01.123 "listen_address": { 00:17:01.123 "trtype": "TCP", 00:17:01.123 "adrfam": "IPv4", 00:17:01.123 "traddr": "10.0.0.2", 00:17:01.123 "trsvcid": "4420" 00:17:01.123 }, 00:17:01.123 "peer_address": { 00:17:01.123 "trtype": "TCP", 00:17:01.123 "adrfam": "IPv4", 00:17:01.123 "traddr": "10.0.0.1", 00:17:01.123 "trsvcid": "51568" 00:17:01.123 }, 00:17:01.123 "auth": { 00:17:01.123 "state": "completed", 00:17:01.123 "digest": "sha512", 00:17:01.123 "dhgroup": "ffdhe8192" 00:17:01.123 } 00:17:01.123 } 00:17:01.123 ]' 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.123 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.381 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:M2M3NmEyMGQ4NTMzMzk4YWYyYTgyY2Y1NjU0YTVkYWVhZjcwYzU5N2EyOGM2ZjhiNGFlZTQxMzRiNjcwY2Q3NRCs8V0=: 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:02.754 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:03.012 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.012 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:03.012 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.012 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:03.012 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.012 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:03.012 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.012 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.012 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.269 request: 00:17:03.269 { 00:17:03.269 "name": "nvme0", 00:17:03.269 "trtype": "tcp", 00:17:03.269 "traddr": "10.0.0.2", 00:17:03.269 "adrfam": "ipv4", 00:17:03.269 "trsvcid": "4420", 00:17:03.269 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:03.269 "prchk_reftag": false, 00:17:03.269 "prchk_guard": false, 00:17:03.269 "hdgst": false, 00:17:03.269 "ddgst": false, 00:17:03.269 "dhchap_key": "key3", 00:17:03.269 "method": "bdev_nvme_attach_controller", 00:17:03.269 "req_id": 1 00:17:03.269 } 00:17:03.269 Got JSON-RPC error response 00:17:03.269 response: 00:17:03.269 { 00:17:03.269 "code": -5, 00:17:03.269 "message": "Input/output error" 00:17:03.269 } 00:17:03.269 10:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:03.269 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:03.269 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:03.269 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:03.270 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:03.270 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:03.270 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:03.270 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:03.527 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.527 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:03.527 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.527 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:03.527 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.527 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:03.527 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.527 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.528 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.786 request: 00:17:03.786 { 00:17:03.786 "name": "nvme0", 00:17:03.786 "trtype": "tcp", 00:17:03.786 "traddr": "10.0.0.2", 00:17:03.786 "adrfam": "ipv4", 00:17:03.786 "trsvcid": "4420", 00:17:03.786 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:03.786 "prchk_reftag": false, 00:17:03.786 "prchk_guard": false, 00:17:03.786 "hdgst": false, 00:17:03.786 "ddgst": false, 00:17:03.786 "dhchap_key": "key3", 00:17:03.786 
"method": "bdev_nvme_attach_controller", 00:17:03.786 "req_id": 1 00:17:03.786 } 00:17:03.786 Got JSON-RPC error response 00:17:03.786 response: 00:17:03.786 { 00:17:03.786 "code": -5, 00:17:03.786 "message": "Input/output error" 00:17:03.786 } 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.786 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:04.044 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:04.306 request: 00:17:04.306 { 00:17:04.306 "name": "nvme0", 00:17:04.306 "trtype": "tcp", 00:17:04.306 "traddr": "10.0.0.2", 00:17:04.306 "adrfam": "ipv4", 00:17:04.306 "trsvcid": "4420", 00:17:04.306 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:04.306 "prchk_reftag": false, 00:17:04.306 "prchk_guard": false, 00:17:04.306 "hdgst": false, 00:17:04.306 "ddgst": false, 00:17:04.306 "dhchap_key": "key0", 00:17:04.306 "dhchap_ctrlr_key": "key1", 00:17:04.306 "method": "bdev_nvme_attach_controller", 00:17:04.306 "req_id": 1 00:17:04.306 } 00:17:04.306 Got JSON-RPC error response 00:17:04.306 response: 00:17:04.306 { 00:17:04.306 "code": -5, 00:17:04.306 "message": "Input/output error" 00:17:04.306 } 00:17:04.306 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:04.306 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:04.306 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:04.306 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:04.306 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:04.306 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:04.564 00:17:04.564 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:04.565 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
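[With the host reconfigured at target/auth.sh@175 to offer every supported digest and DH group, the plain attach with key0 at @192 succeeds; the surrounding NOT cases exercise the failure paths (a mismatched controller key, and digest/dhgroup sets with no common member), since DH-HMAC-CHAP only completes when host and target negotiate a common digest and FFDHE group. A recap of the passing host-side sequence, with the socket path, flags, and NQNs taken from the trace:

    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
    # verify the controller came up under the expected name
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
]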
00:17:04.565 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.822 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.822 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.822 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1492553 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1492553 ']' 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1492553 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1492553 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1492553' 00:17:05.080 killing process with pid 1492553 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1492553 00:17:05.080 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1492553 00:17:05.338 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:05.338 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:05.338 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:05.338 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.338 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:05.338 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.338 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.338 rmmod nvme_tcp 00:17:05.596 rmmod nvme_fabrics 00:17:05.596 rmmod nvme_keyring 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 1513096 ']' 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1513096 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1513096 ']' 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1513096 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1513096 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1513096' 00:17:05.596 killing process with pid 1513096 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1513096 00:17:05.596 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1513096 00:17:05.854 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:05.854 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:05.854 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.854 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.854 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.854 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.854 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.854 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.762 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:07.762 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uL5 /tmp/spdk.key-sha256.IwW /tmp/spdk.key-sha384.Gi1 /tmp/spdk.key-sha512.CO0 /tmp/spdk.key-sha512.ur4 /tmp/spdk.key-sha384.gzK /tmp/spdk.key-sha256.xpK '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:07.762 00:17:07.762 real 3m38.735s 00:17:07.762 user 8m29.529s 00:17:07.762 sys 0m25.797s 00:17:07.762 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.762 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.762 ************************************ 00:17:07.762 END TEST nvmf_auth_target 00:17:07.762 ************************************ 00:17:07.762 10:23:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:07.762 10:23:57 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:07.762 10:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:07.762 10:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.762 10:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.762 ************************************ 00:17:07.762 START TEST nvmf_bdevio_no_huge 00:17:07.762 ************************************ 00:17:07.762 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:08.021 * Looking for test storage... 00:17:08.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.021 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.022 10:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.022 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:09.931 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.931 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.931 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.931 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.931 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.931 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.931 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.932 10:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:09.932 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.932 10:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:09.932 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:09.932 Found net devices under 0000:08:00.0: cvl_0_0 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:09.932 Found net devices under 0000:08:00.1: cvl_0_1 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:09.932 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:17:09.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:17:09.932 00:17:09.932 --- 10.0.0.2 ping statistics --- 00:17:09.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.932 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:17:09.932 00:17:09.932 --- 10.0.0.1 ping statistics --- 00:17:09.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.932 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:09.932 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1515249 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1515249 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1515249 ']' 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
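For reference, the namespace topology assembled by nvmf_tcp_init above reduces to a handful of iproute2/iptables commands: the first E810 port (cvl_0_0) moves into a dedicated namespace as the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal standalone sketch of that bring-up, assuming the cvl_0_* names from this run (substitute the NICs on your own system):

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

Every nvmf_tgt invocation that follows is then prefixed with 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD array folded into NVMF_APP), which is why the target listens on 10.0.0.2 while bdevio connects from the root namespace.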
00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.933 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:09.933 [2024-07-25 10:23:59.409997] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:17:09.933 [2024-07-25 10:23:59.410096] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:09.933 [2024-07-25 10:23:59.484227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.933 [2024-07-25 10:23:59.609615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.933 [2024-07-25 10:23:59.609682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.933 [2024-07-25 10:23:59.609698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.933 [2024-07-25 10:23:59.609711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.933 [2024-07-25 10:23:59.609723] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.933 [2024-07-25 10:23:59.609811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:09.933 [2024-07-25 10:23:59.609916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:09.933 [2024-07-25 10:23:59.609919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.933 [2024-07-25 10:23:59.609865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.192 [2024-07-25 10:23:59.741516] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.192 10:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.192 Malloc0 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.192 [2024-07-25 10:23:59.780274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:10.192 { 00:17:10.192 "params": { 00:17:10.192 "name": "Nvme$subsystem", 00:17:10.192 "trtype": "$TEST_TRANSPORT", 00:17:10.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.192 "adrfam": "ipv4", 00:17:10.192 "trsvcid": "$NVMF_PORT", 00:17:10.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.192 "hdgst": ${hdgst:-false}, 00:17:10.192 "ddgst": ${ddgst:-false} 00:17:10.192 }, 00:17:10.192 "method": "bdev_nvme_attach_controller" 00:17:10.192 } 00:17:10.192 EOF 00:17:10.192 )") 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
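The rpc_cmd calls traced above map onto plain scripts/rpc.py invocations (the harness may route them through its own RPC plumbing, but the methods and arguments are the same), so the subsystem under test can be rebuilt against any running nvmf_tgt roughly like this, with the values taken verbatim from the bdevio.sh trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as traced
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json then renders the matching initiator side: the bdev_nvme_attach_controller JSON printed just below is what bdevio receives on /dev/fd/62.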
00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:10.192 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:10.192 "params": { 00:17:10.192 "name": "Nvme1", 00:17:10.192 "trtype": "tcp", 00:17:10.192 "traddr": "10.0.0.2", 00:17:10.192 "adrfam": "ipv4", 00:17:10.192 "trsvcid": "4420", 00:17:10.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.192 "hdgst": false, 00:17:10.192 "ddgst": false 00:17:10.192 }, 00:17:10.192 "method": "bdev_nvme_attach_controller" 00:17:10.192 }' 00:17:10.192 [2024-07-25 10:23:59.831015] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:17:10.192 [2024-07-25 10:23:59.831117] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1515282 ] 00:17:10.192 [2024-07-25 10:23:59.897029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.450 [2024-07-25 10:24:00.026504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.450 [2024-07-25 10:24:00.026592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.450 [2024-07-25 10:24:00.026625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.708 I/O targets: 00:17:10.708 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:10.708 00:17:10.708 00:17:10.708 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.708 http://cunit.sourceforge.net/ 00:17:10.708 00:17:10.708 00:17:10.708 Suite: bdevio tests on: Nvme1n1 00:17:10.708 Test: blockdev write read block ...passed 00:17:10.708 Test: blockdev write zeroes read block ...passed 00:17:10.708 Test: blockdev write zeroes read no split ...passed 00:17:10.708 Test: blockdev write zeroes read split ...passed 00:17:10.966 Test: blockdev write zeroes read split partial ...passed 00:17:10.966 Test: blockdev reset ...[2024-07-25 10:24:00.517104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:10.966 [2024-07-25 10:24:00.517219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3c570 (9): Bad file descriptor 00:17:10.966 [2024-07-25 10:24:00.534363] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:10.966 passed 00:17:10.966 Test: blockdev write read 8 blocks ...passed 00:17:10.967 Test: blockdev write read size > 128k ...passed 00:17:10.967 Test: blockdev write read invalid size ...passed 00:17:10.967 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:10.967 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:10.967 Test: blockdev write read max offset ...passed 00:17:10.967 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:10.967 Test: blockdev writev readv 8 blocks ...passed 00:17:10.967 Test: blockdev writev readv 30 x 1block ...passed 00:17:10.967 Test: blockdev writev readv block ...passed 00:17:11.226 Test: blockdev writev readv size > 128k ...passed 00:17:11.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:11.226 Test: blockdev comparev and writev ...[2024-07-25 10:24:00.788129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.226 [2024-07-25 10:24:00.788169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.226 [2024-07-25 10:24:00.788196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.226 [2024-07-25 10:24:00.788214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:11.226 [2024-07-25 10:24:00.788587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.226 [2024-07-25 10:24:00.788622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:11.226 [2024-07-25 10:24:00.788645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.226 [2024-07-25 10:24:00.788662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:11.226 [2024-07-25 10:24:00.789028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.226 [2024-07-25 10:24:00.789054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:11.226 [2024-07-25 10:24:00.789078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.226 [2024-07-25 10:24:00.789094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:11.226 [2024-07-25 10:24:00.789434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.227 [2024-07-25 10:24:00.789461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:11.227 [2024-07-25 10:24:00.789492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.227 [2024-07-25 10:24:00.789530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:11.227 passed 00:17:11.227 Test: blockdev nvme passthru rw ...passed 00:17:11.227 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:24:00.871806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.227 [2024-07-25 10:24:00.871837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:11.227 [2024-07-25 10:24:00.872019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.227 [2024-07-25 10:24:00.872043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:11.227 [2024-07-25 10:24:00.872215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.227 [2024-07-25 10:24:00.872238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:11.227 [2024-07-25 10:24:00.872414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.227 [2024-07-25 10:24:00.872437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:11.227 passed 00:17:11.227 Test: blockdev nvme admin passthru ...passed 00:17:11.227 Test: blockdev copy ...passed 00:17:11.227 00:17:11.227 Run Summary: Type Total Ran Passed Failed Inactive 00:17:11.227 suites 1 1 n/a 0 0 00:17:11.227 tests 23 23 23 0 0 00:17:11.227 asserts 152 152 152 0 n/a 00:17:11.227 00:17:11.227 Elapsed time = 1.239 seconds 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.792 rmmod nvme_tcp 00:17:11.792 rmmod nvme_fabrics 00:17:11.792 rmmod nvme_keyring 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1515249 ']' 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1515249 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1515249 ']' 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1515249 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1515249 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1515249' 00:17:11.792 killing process with pid 1515249 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1515249 00:17:11.792 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1515249 00:17:12.051 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.051 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.051 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.051 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.051 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.051 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.051 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.051 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.594 00:17:14.594 real 0m6.341s 00:17:14.594 user 0m11.471s 00:17:14.594 sys 0m2.267s 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.594 ************************************ 00:17:14.594 END TEST nvmf_bdevio_no_huge 00:17:14.594 ************************************ 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.594 ************************************ 00:17:14.594 START TEST nvmf_tls 00:17:14.594 ************************************ 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:14.594 * Looking for test storage... 00:17:14.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.594 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
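One detail worth noting from the common.sh preamble above: the initiator identity is minted once per source with nvme gen-hostnqn, and the host ID is the UUID suffix of that NQN. A rough equivalent, assuming the suffix-stripping implied by the traced assignments at common.sh@17/@18:

    NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}          # reuse the trailing UUID as the host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

Connect commands in the suite can then pass this pair so the target sees a stable host identity across reconnects.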
00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.595 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.973 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:15.974 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:15.974 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:15.974 Found net devices under 0000:08:00.0: cvl_0_0 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:15.974 Found net devices under 0000:08:00.1: cvl_0_1 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.974 10:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:15.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:17:15.974 00:17:15.974 --- 10.0.0.2 ping statistics --- 00:17:15.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.974 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:15.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:17:15.974 00:17:15.974 --- 10.0.0.1 ping statistics --- 00:17:15.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.974 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1517078 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1517078 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1517078 ']' 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.974 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.233 [2024-07-25 10:24:05.776079] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:17:16.233 [2024-07-25 10:24:05.776179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.233 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.233 [2024-07-25 10:24:05.843823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.233 [2024-07-25 10:24:05.959419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.233 [2024-07-25 10:24:05.959489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.233 [2024-07-25 10:24:05.959507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.233 [2024-07-25 10:24:05.959521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.233 [2024-07-25 10:24:05.959532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.233 [2024-07-25 10:24:05.959570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.491 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.491 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:16.491 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.491 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:16.491 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.491 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.491 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:16.491 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:16.749 true 00:17:16.749 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:16.749 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:17.007 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:17.007 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:17.007 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:17.264 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:17.264 10:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:17.522 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:17.522 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:17.522 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:17:17.780 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:17.780 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:17.780 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:17.780 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:17.780 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:17.780 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:18.038 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:18.038 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:18.038 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:18.295 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:18.295 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:18.552 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:18.552 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:18.552 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:18.809 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:18.809 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:19.067 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.a6gS4wWXf5 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.NOf2I28TKq 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.a6gS4wWXf5 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NOf2I28TKq 00:17:19.324 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:19.581 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:19.838 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.a6gS4wWXf5 00:17:19.838 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.a6gS4wWXf5 00:17:19.838 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:20.095 [2024-07-25 10:24:09.677900] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.095 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:20.352 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:20.608 [2024-07-25 10:24:10.191296] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:20.608 [2024-07-25 10:24:10.191536] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.608 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:20.865 malloc0 00:17:20.865 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:21.122 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a6gS4wWXf5
00:17:21.379 [2024-07-25 10:24:10.926730] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:17:21.379 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.a6gS4wWXf5
00:17:21.379 EAL: No free 2048 kB hugepages reported on node 1
00:17:31.376 Initializing NVMe Controllers
00:17:31.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:31.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:31.377 Initialization complete. Launching workers.
00:17:31.377 ========================================================
00:17:31.377                                                                                                      Latency(us)
00:17:31.377 Device Information                                                    :       IOPS      MiB/s    Average        min        max
00:17:31.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    7563.40      29.54    8464.62    1105.30    9840.37
00:17:31.377 ========================================================
00:17:31.377 Total                                                                 :    7563.40      29.54    8464.62    1105.30    9840.37
00:17:31.377
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a6gS4wWXf5
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a6gS4wWXf5'
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1519047
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1519047 /var/tmp/bdevperf.sock
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1519047 ']'
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:31.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:31.377 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:31.377 [2024-07-25 10:24:21.106726] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:17:31.377 [2024-07-25 10:24:21.106822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519047 ]
00:17:31.657 EAL: No free 2048 kB hugepages reported on node 1
00:17:31.657 [2024-07-25 10:24:21.182276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:31.657 [2024-07-25 10:24:21.338339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:17:31.920 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:31.920 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:17:31.920 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a6gS4wWXf5
00:17:32.178 [2024-07-25 10:24:21.747495] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:17:32.178 [2024-07-25 10:24:21.747629] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:17:32.178 TLSTESTn1
00:17:32.178 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:17:32.178 Running I/O for 10 seconds...
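The NVMeTLSkey-1:01:...: strings generated earlier by format_interchange_psk are the TP 8006 PSK interchange format: the configured key material with a CRC-32 appended, base64-encoded between the versioned prefix and a trailing colon. A minimal Python sketch of that transform, mirroring what nvmf/common.sh pipes into python - (the little-endian CRC convention is an assumption of this sketch):

    import base64
    import zlib

    def format_interchange_psk(key: str, hash_id: int) -> str:
        # Append the key's CRC-32 (assumed little-endian) and base64 the
        # result, then wrap it in the NVMeTLSkey-1:<hash>:...: envelope.
        data = key.encode("ascii")
        crc = zlib.crc32(data).to_bytes(4, "little")
        b64 = base64.b64encode(data + crc).decode("ascii")
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

    # Should reproduce the first key logged above if the CRC convention holds:
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))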
00:17:44.374
00:17:44.374                                                                                                      Latency(us)
00:17:44.375 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:44.375 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:44.375 Verification LBA range: start 0x0 length 0x2000
00:17:44.375 TLSTESTn1                              :      10.04    3858.37      15.07       0.00     0.00   33089.36    8446.86   52817.16
00:17:44.375 ===================================================================================================================
00:17:44.375 Total                                  :               3858.37      15.07       0.00     0.00   33089.36    8446.86   52817.16
00:17:44.375 0
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1519047
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1519047 ']'
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1519047
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1519047
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1519047'
00:17:44.375 killing process with pid 1519047
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1519047
00:17:44.375 Received shutdown signal, test time was about 10.000000 seconds
00:17:44.375
00:17:44.375                                                                                                      Latency(us)
00:17:44.375 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:44.375 ===================================================================================================================
00:17:44.375 Total                                  :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:17:44.375 [2024-07-25 10:24:32.059026] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1519047
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NOf2I28TKq
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NOf2I28TKq
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
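From here the suite turns to its negative cases: NOT run_bdevperf ... expects the controller attach to fail, first because /tmp/tmp.NOf2I28TKq is not the PSK registered for host1 on cnode1. The same check can be driven from Python against rpc.py (a sketch; the paths are the ones this job uses):

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    # Attaching with a PSK the target never registered for this
    # host/subsystem pair should fail the TLS handshake and surface
    # as a JSON-RPC error, as the dump further below shows.
    result = subprocess.run(
        [RPC, "-s", "/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller",
         "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
         "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
         "-q", "nqn.2016-06.io.spdk:host1", "--psk", "/tmp/tmp.NOf2I28TKq"],
        capture_output=True, text=True)
    assert result.returncode != 0, "attach with the wrong PSK must not succeed"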
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NOf2I28TKq 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NOf2I28TKq' 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1520049 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1520049 /var/tmp/bdevperf.sock 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1520049 ']' 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.375 [2024-07-25 10:24:32.298802] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
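The waitforlisten helper invoked above simply polls until the freshly forked bdevperf answers on its RPC socket before any RPCs are issued. A rough Python equivalent (socket path from the log; the retry count and sleep interval are assumptions):

    import socket
    import time

    def wait_for_rpc_socket(path="/var/tmp/bdevperf.sock", retries=100):
        # Succeeds once the UNIX-domain RPC socket accepts a connection,
        # the same readiness condition waitforlisten checks.
        for _ in range(retries):
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(path)
                    return
            except OSError:
                time.sleep(0.1)
        raise TimeoutError(f"no listener on {path}")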
00:17:44.375 [2024-07-25 10:24:32.298897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520049 ] 00:17:44.375 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.375 [2024-07-25 10:24:32.354927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.375 [2024-07-25 10:24:32.454548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NOf2I28TKq 00:17:44.375 [2024-07-25 10:24:32.834241] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.375 [2024-07-25 10:24:32.834379] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:44.375 [2024-07-25 10:24:32.839582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:44.375 [2024-07-25 10:24:32.840138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bd470 (107): Transport endpoint is not connected 00:17:44.375 [2024-07-25 10:24:32.841135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bd470 (9): Bad file descriptor 00:17:44.375 [2024-07-25 10:24:32.842127] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:44.375 [2024-07-25 10:24:32.842150] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:44.375 [2024-07-25 10:24:32.842179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
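The request:/response: dump below is ordinary SPDK JSON-RPC over that UNIX socket; rpc.py is only a thin wrapper around it. A hand-rolled sketch of the same failing call (single-shot framing is assumed here; a robust client would stream-parse the reply):

    import json
    import socket

    req = {"jsonrpc": "2.0", "id": 1,
           "method": "bdev_nvme_attach_controller",
           "params": {"name": "TLSTEST", "trtype": "tcp",
                      "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "psk": "/tmp/tmp.NOf2I28TKq"}}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/var/tmp/bdevperf.sock")
        s.sendall(json.dumps(req).encode())
        print(s.recv(65536).decode())  # expect "code": -5, Input/output error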
00:17:44.375 request:
00:17:44.375 {
00:17:44.375   "name": "TLSTEST",
00:17:44.375   "trtype": "tcp",
00:17:44.375   "traddr": "10.0.0.2",
00:17:44.375   "adrfam": "ipv4",
00:17:44.375   "trsvcid": "4420",
00:17:44.375   "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:44.375   "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:44.375   "prchk_reftag": false,
00:17:44.375   "prchk_guard": false,
00:17:44.375   "hdgst": false,
00:17:44.375   "ddgst": false,
00:17:44.375   "psk": "/tmp/tmp.NOf2I28TKq",
00:17:44.375   "method": "bdev_nvme_attach_controller",
00:17:44.375   "req_id": 1
00:17:44.375 }
00:17:44.375 Got JSON-RPC error response
00:17:44.375 response:
00:17:44.375 {
00:17:44.375   "code": -5,
00:17:44.375   "message": "Input/output error"
00:17:44.375 }
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1520049
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1520049 ']'
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1520049
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520049
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520049'
00:17:44.375 killing process with pid 1520049
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1520049
00:17:44.375 Received shutdown signal, test time was about 10.000000 seconds
00:17:44.375
00:17:44.375                                                                                                      Latency(us)
00:17:44.375 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:44.375 ===================================================================================================================
00:17:44.375 Total                                  :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:44.375 [2024-07-25 10:24:32.890586] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:17:44.375 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1520049
00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.a6gS4wWXf5
00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.a6gS4wWXf5 00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.375 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.a6gS4wWXf5 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a6gS4wWXf5' 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1520104 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1520104 /var/tmp/bdevperf.sock 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1520104 ']' 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.376 [2024-07-25 10:24:33.115248] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:17:44.376 [2024-07-25 10:24:33.115333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520104 ] 00:17:44.376 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.376 [2024-07-25 10:24:33.166843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.376 [2024-07-25 10:24:33.263806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.a6gS4wWXf5 00:17:44.376 [2024-07-25 10:24:33.653151] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.376 [2024-07-25 10:24:33.653280] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:44.376 [2024-07-25 10:24:33.658665] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:44.376 [2024-07-25 10:24:33.658693] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:44.376 [2024-07-25 10:24:33.658740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:44.376 [2024-07-25 10:24:33.658927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa47470 (107): Transport endpoint is not connected 00:17:44.376 [2024-07-25 10:24:33.659906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa47470 (9): Bad file descriptor 00:17:44.376 [2024-07-25 10:24:33.660906] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:44.376 [2024-07-25 10:24:33.660922] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:44.376 [2024-07-25 10:24:33.660957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
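The Could not find PSK for identity errors above show how the target keys its PSK table: the TLS client offers an identity derived from its hostnqn and the subsystem NQN, and nothing was ever registered for host2. The identity string, as it appears in these logs:

    def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        # Layout taken from the log lines above; the NVMe0R01 prefix carries
        # protocol and hash details defined by the NVMe/TCP TLS spec.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1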
00:17:44.376 request:
00:17:44.376 {
00:17:44.376   "name": "TLSTEST",
00:17:44.376   "trtype": "tcp",
00:17:44.376   "traddr": "10.0.0.2",
00:17:44.376   "adrfam": "ipv4",
00:17:44.376   "trsvcid": "4420",
00:17:44.376   "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:44.376   "hostnqn": "nqn.2016-06.io.spdk:host2",
00:17:44.376   "prchk_reftag": false,
00:17:44.376   "prchk_guard": false,
00:17:44.376   "hdgst": false,
00:17:44.376   "ddgst": false,
00:17:44.376   "psk": "/tmp/tmp.a6gS4wWXf5",
00:17:44.376   "method": "bdev_nvme_attach_controller",
00:17:44.376   "req_id": 1
00:17:44.376 }
00:17:44.376 Got JSON-RPC error response
00:17:44.376 response:
00:17:44.376 {
00:17:44.376   "code": -5,
00:17:44.376   "message": "Input/output error"
00:17:44.376 }
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1520104
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1520104 ']'
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1520104
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520104
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520104'
00:17:44.376 killing process with pid 1520104
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1520104
00:17:44.376 Received shutdown signal, test time was about 10.000000 seconds
00:17:44.376
00:17:44.376                                                                                                      Latency(us)
00:17:44.376 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:44.376 ===================================================================================================================
00:17:44.376 Total                                  :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:44.376 [2024-07-25 10:24:33.698543] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1520104
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.a6gS4wWXf5
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.a6gS4wWXf5 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.a6gS4wWXf5 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.376 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a6gS4wWXf5' 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1520171 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1520171 /var/tmp/bdevperf.sock 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1520171 ']' 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.377 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.377 [2024-07-25 10:24:33.910770] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:17:44.377 [2024-07-25 10:24:33.910864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520171 ] 00:17:44.377 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.377 [2024-07-25 10:24:33.959447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.377 [2024-07-25 10:24:34.056743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.635 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.635 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:44.635 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a6gS4wWXf5 00:17:44.635 [2024-07-25 10:24:34.406802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.635 [2024-07-25 10:24:34.406919] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:44.635 [2024-07-25 10:24:34.411923] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:44.635 [2024-07-25 10:24:34.411952] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:44.635 [2024-07-25 10:24:34.411987] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:44.893 [2024-07-25 10:24:34.412607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ee470 (107): Transport endpoint is not connected 00:17:44.893 [2024-07-25 10:24:34.413596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ee470 (9): Bad file descriptor 00:17:44.893 [2024-07-25 10:24:34.414594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:44.893 [2024-07-25 10:24:34.414612] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:44.893 [2024-07-25 10:24:34.414629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
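This third case fails for the symmetric reason: no subsystem cnode2 exists on the target, so no PSK identity can match. One way to confirm what is actually registered is the nvmf_get_subsystems RPC (a sketch; the exact output shape is assumed from SPDK's JSON conventions):

    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    # List the target's subsystems and their allowed hosts;
    # nqn.2016-06.io.spdk:cnode2 should be absent from this output.
    out = subprocess.run([RPC, "nvmf_get_subsystems"],
                         capture_output=True, text=True, check=True).stdout
    for ss in json.loads(out):
        print(ss["nqn"], [h.get("nqn") for h in ss.get("hosts", [])])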
00:17:44.893 request:
00:17:44.893 {
00:17:44.893   "name": "TLSTEST",
00:17:44.893   "trtype": "tcp",
00:17:44.893   "traddr": "10.0.0.2",
00:17:44.893   "adrfam": "ipv4",
00:17:44.893   "trsvcid": "4420",
00:17:44.893   "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:17:44.893   "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:44.893   "prchk_reftag": false,
00:17:44.893   "prchk_guard": false,
00:17:44.893   "hdgst": false,
00:17:44.893   "ddgst": false,
00:17:44.893   "psk": "/tmp/tmp.a6gS4wWXf5",
00:17:44.893   "method": "bdev_nvme_attach_controller",
00:17:44.893   "req_id": 1
00:17:44.893 }
00:17:44.893 Got JSON-RPC error response
00:17:44.893 response:
00:17:44.893 {
00:17:44.893   "code": -5,
00:17:44.893   "message": "Input/output error"
00:17:44.893 }
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1520171
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1520171 ']'
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1520171
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520171
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520171'
00:17:44.893 killing process with pid 1520171
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1520171
00:17:44.893 Received shutdown signal, test time was about 10.000000 seconds
00:17:44.893
00:17:44.893                                                                                                      Latency(us)
00:17:44.893 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:44.893 ===================================================================================================================
00:17:44.893 Total                                  :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:44.893 [2024-07-25 10:24:34.454627] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1520171
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.893 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1520271 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1520271 /var/tmp/bdevperf.sock 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1520271 ']' 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.894 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.152 [2024-07-25 10:24:34.688556] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:17:45.152 [2024-07-25 10:24:34.688647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520271 ] 00:17:45.152 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.152 [2024-07-25 10:24:34.745973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.152 [2024-07-25 10:24:34.852969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.410 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.410 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:45.410 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:45.668 [2024-07-25 10:24:35.252818] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:45.668 [2024-07-25 10:24:35.254984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186fa20 (9): Bad file descriptor 00:17:45.668 [2024-07-25 10:24:35.255980] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:45.668 [2024-07-25 10:24:35.255999] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:45.668 [2024-07-25 10:24:35.256028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
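Counting the successful run at tls.sh line 143, the file has now exercised the whole matrix: correct key, wrong key, unknown hostnqn, unknown subnqn, and, in the error below, no key at all against a TLS-only listener. The same matrix condensed into one loop (a sketch reusing the rpc.py driver idea from the earlier example):

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    CASES = [
        ("nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", "/tmp/tmp.NOf2I28TKq"),  # wrong key
        ("nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host2", "/tmp/tmp.a6gS4wWXf5"),  # unknown host
        ("nqn.2016-06.io.spdk:cnode2", "nqn.2016-06.io.spdk:host1", "/tmp/tmp.a6gS4wWXf5"),  # unknown subsystem
        ("nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", None),                   # no PSK at all
    ]

    for subnqn, hostnqn, psk in CASES:
        args = [RPC, "-s", "/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller",
                "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
                "-f", "ipv4", "-n", subnqn, "-q", hostnqn]
        if psk:
            args += ["--psk", psk]
        # Every combination must be rejected by the TLS-only target.
        assert subprocess.run(args).returncode != 0, (subnqn, hostnqn, psk)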
00:17:45.668 request:
00:17:45.668 {
00:17:45.668   "name": "TLSTEST",
00:17:45.668   "trtype": "tcp",
00:17:45.668   "traddr": "10.0.0.2",
00:17:45.668   "adrfam": "ipv4",
00:17:45.668   "trsvcid": "4420",
00:17:45.668   "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:45.668   "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:45.668   "prchk_reftag": false,
00:17:45.668   "prchk_guard": false,
00:17:45.668   "hdgst": false,
00:17:45.668   "ddgst": false,
00:17:45.668   "method": "bdev_nvme_attach_controller",
00:17:45.668   "req_id": 1
00:17:45.668 }
00:17:45.668 Got JSON-RPC error response
00:17:45.668 response:
00:17:45.668 {
00:17:45.668   "code": -5,
00:17:45.668   "message": "Input/output error"
00:17:45.668 }
00:17:45.668 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1520271
00:17:45.668 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1520271 ']'
00:17:45.668 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1520271
00:17:45.668 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:17:45.668 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:45.669 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520271
00:17:45.669 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:17:45.669 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:17:45.669 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520271'
00:17:45.669 killing process with pid 1520271
00:17:45.669 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1520271
00:17:45.669 Received shutdown signal, test time was about 10.000000 seconds
00:17:45.669
00:17:45.669                                                                                                      Latency(us)
00:17:45.669 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:45.669 ===================================================================================================================
00:17:45.669 Total                                  :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:45.669 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1520271
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1517078
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1517078 ']'
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1517078
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1517078 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1517078' 00:17:45.927 killing process with pid 1517078 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1517078 00:17:45.927 [2024-07-25 10:24:35.496116] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1517078 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:45.927 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.hps9t6l2BG 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.hps9t6l2BG 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1520391 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1520391 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1520391 ']' 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.186 10:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.186 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.186 [2024-07-25 10:24:35.781901] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:17:46.186 [2024-07-25 10:24:35.782000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.186 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.186 [2024-07-25 10:24:35.832372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.186 [2024-07-25 10:24:35.925216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.186 [2024-07-25 10:24:35.925274] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.186 [2024-07-25 10:24:35.925287] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.186 [2024-07-25 10:24:35.925297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.186 [2024-07-25 10:24:35.925306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
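Once the restarted nvmf_tgt (-m 0x2) reports its reactor, setup_nvmf_tgt repeats the target-side recipe just below with the longer :02: interchange key. The sequence, condensed into one Python driver (a sketch; every RPC and argument is taken verbatim from the calls logged here):

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def rpc(*args):
        subprocess.run([RPC, *args], check=True)

    # Transport, subsystem, TLS-only listener, backing namespace, allowed host.
    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")  # -k: require TLS
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", "/tmp/tmp.hps9t6l2BG")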
00:17:46.186 [2024-07-25 10:24:35.925331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.444 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.444 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:46.444 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:46.444 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:46.444 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.444 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.444 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.hps9t6l2BG 00:17:46.444 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hps9t6l2BG 00:17:46.444 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:46.702 [2024-07-25 10:24:36.342592] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.702 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:46.960 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:47.218 [2024-07-25 10:24:36.912029] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:47.218 [2024-07-25 10:24:36.912229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.218 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:47.476 malloc0 00:17:47.476 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:47.733 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG 00:17:47.991 [2024-07-25 10:24:37.651125] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hps9t6l2BG 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hps9t6l2BG' 00:17:47.991 10:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1520609 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1520609 /var/tmp/bdevperf.sock 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1520609 ']' 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.991 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.991 [2024-07-25 10:24:37.708954] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:17:47.991 [2024-07-25 10:24:37.709042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520609 ] 00:17:47.991 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.991 [2024-07-25 10:24:37.757161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.249 [2024-07-25 10:24:37.852793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.249 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.249 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:48.249 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG 00:17:48.507 [2024-07-25 10:24:38.159379] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.507 [2024-07-25 10:24:38.159478] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:48.507 TLSTESTn1 00:17:48.507 10:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:48.765 Running I/O for 10 seconds... 
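The positive-path bdevperf run above (run_bdevperf, tls.sh@167) boils down to three commands: start bdevperf idle with -z on a private RPC socket, attach a TLS-enabled controller using the 0600 key file, then kick off the timed workload. Condensed from the command lines in this log, with the long workspace prefix shortened to ./ for readability:

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The backgrounding and the waitforlisten synchronization between the first two steps are implied by the trace above; the attach creates the TLSTESTn1 bdev that the 10-second verify below runs against.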
00:17:58.724 00:17:58.724 Latency(us) 00:17:58.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.724 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:58.724 Verification LBA range: start 0x0 length 0x2000 00:17:58.724 TLSTESTn1 : 10.02 3783.23 14.78 0.00 0.00 33764.82 8058.50 34369.99 00:17:58.724 =================================================================================================================== 00:17:58.724 Total : 3783.23 14.78 0.00 0.00 33764.82 8058.50 34369.99 00:17:58.724 0 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1520609 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1520609 ']' 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1520609 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520609 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520609' 00:17:58.724 killing process with pid 1520609 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1520609 00:17:58.724 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.724 00:17:58.724 Latency(us) 00:17:58.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.724 =================================================================================================================== 00:17:58.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.724 [2024-07-25 10:24:48.453186] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:58.724 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1520609 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.hps9t6l2BG 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hps9t6l2BG 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hps9t6l2BG 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:58.981 
10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hps9t6l2BG 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hps9t6l2BG' 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1521608 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1521608 /var/tmp/bdevperf.sock 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1521608 ']' 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.981 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.981 [2024-07-25 10:24:48.699017] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:17:58.981 [2024-07-25 10:24:48.699116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521608 ] 00:17:58.981 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.981 [2024-07-25 10:24:48.754918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.238 [2024-07-25 10:24:48.852384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.238 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.239 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:59.239 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG 00:17:59.496 [2024-07-25 10:24:49.225774] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.496 [2024-07-25 10:24:49.225854] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:59.496 [2024-07-25 10:24:49.225869] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.hps9t6l2BG 00:17:59.496 request: 00:17:59.496 { 00:17:59.496 "name": "TLSTEST", 00:17:59.496 "trtype": "tcp", 00:17:59.496 "traddr": "10.0.0.2", 00:17:59.496 "adrfam": "ipv4", 00:17:59.496 "trsvcid": "4420", 00:17:59.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.496 "prchk_reftag": false, 00:17:59.496 "prchk_guard": false, 00:17:59.496 "hdgst": false, 00:17:59.496 "ddgst": false, 00:17:59.496 "psk": "/tmp/tmp.hps9t6l2BG", 00:17:59.496 "method": "bdev_nvme_attach_controller", 00:17:59.496 "req_id": 1 00:17:59.496 } 00:17:59.496 Got JSON-RPC error response 00:17:59.496 response: 00:17:59.496 { 00:17:59.496 "code": -1, 00:17:59.496 "message": "Operation not permitted" 00:17:59.496 } 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1521608 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1521608 ']' 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1521608 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1521608 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1521608' 00:17:59.496 killing process with pid 1521608 00:17:59.496 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1521608 00:17:59.496 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.496 
00:17:59.496 Latency(us) 00:17:59.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.496 =================================================================================================================== 00:17:59.496 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1521608 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1520391 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1520391 ']' 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1520391 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520391 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520391' 00:17:59.755 killing process with pid 1520391 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1520391 00:17:59.755 [2024-07-25 10:24:49.483707] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:59.755 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1520391 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1521717 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1521717 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1521717 ']' 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.014 10:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.014 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.014 [2024-07-25 10:24:49.737071] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:18:00.014 [2024-07-25 10:24:49.737166] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.014 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.272 [2024-07-25 10:24:49.797979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.272 [2024-07-25 10:24:49.900995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.272 [2024-07-25 10:24:49.901054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.272 [2024-07-25 10:24:49.901090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.272 [2024-07-25 10:24:49.901102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.272 [2024-07-25 10:24:49.901112] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
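The failed attach just logged is the expected outcome: tls.sh@170 loosened the key file to 0666, and tls.sh@171 wrapped run_bdevperf in the NOT helper from autotest_common.sh, which inverts the wrapped command's exit status. A sketch of the effect (the real helper also records the error status, as the es=1 lines show):

NOT() { ! "$@"; }                  # succeed exactly when the wrapped command fails
chmod 0666 /tmp/tmp.hps9t6l2BG     # key file now group/other-accessible
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hps9t6l2BG

With those permissions bdev_nvme refuses to load the key ('Incorrect permissions for PSK file'), bdev_nvme_attach_controller returns code -1 'Operation not permitted', and the NOT wrapper turns that failure into a pass.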
00:18:00.272 [2024-07-25 10:24:49.901141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.272 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.272 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:00.272 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.272 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.272 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.hps9t6l2BG 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.hps9t6l2BG 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.hps9t6l2BG 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hps9t6l2BG 00:18:00.272 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.529 [2024-07-25 10:24:50.305815] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.786 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:01.043 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:01.300 [2024-07-25 10:24:50.899398] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.300 [2024-07-25 10:24:50.899651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.300 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:01.557 malloc0 00:18:01.557 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:01.815 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG 00:18:02.073 [2024-07-25 10:24:51.734898] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:02.073 [2024-07-25 10:24:51.734939] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:02.073 [2024-07-25 10:24:51.734986] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:02.073 request: 00:18:02.073 { 00:18:02.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.073 "host": "nqn.2016-06.io.spdk:host1", 00:18:02.073 "psk": "/tmp/tmp.hps9t6l2BG", 00:18:02.073 "method": "nvmf_subsystem_add_host", 00:18:02.073 "req_id": 1 00:18:02.073 } 00:18:02.073 Got JSON-RPC error response 00:18:02.073 response: 00:18:02.073 { 00:18:02.073 "code": -32603, 00:18:02.073 "message": "Internal error" 00:18:02.073 } 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1521717 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1521717 ']' 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1521717 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1521717 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1521717' 00:18:02.073 killing process with pid 1521717 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1521717 00:18:02.073 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1521717 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.hps9t6l2BG 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1521950 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1521950 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1521950 ']' 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.331 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.331 [2024-07-25 10:24:52.027132] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:18:02.331 [2024-07-25 10:24:52.027227] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.331 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.331 [2024-07-25 10:24:52.086802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.588 [2024-07-25 10:24:52.190579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.588 [2024-07-25 10:24:52.190644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.588 [2024-07-25 10:24:52.190678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.588 [2024-07-25 10:24:52.190690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.588 [2024-07-25 10:24:52.190700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
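The same permission check exists on the target side: with the key still 0666, setup_nvmf_tgt's nvmf_subsystem_add_host (the tls.sh@177 NOT block above) could not retrieve the PSK, and the RPC failed with -32603 'Internal error'. tls.sh@181 then restores owner-only permissions and tls.sh@184 starts a fresh target so the positive path can be re-run; in outline:

chmod 0600 /tmp/tmp.hps9t6l2BG     # tls.sh@181: owner-only again
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG
# now succeeds, logging only the 'PSK path' deprecation warning (tls.sh@58 below)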
00:18:02.588 [2024-07-25 10:24:52.190727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.588 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.588 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:02.588 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.588 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.588 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.588 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.588 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.hps9t6l2BG 00:18:02.588 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hps9t6l2BG 00:18:02.588 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:02.846 [2024-07-25 10:24:52.593762] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.846 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:03.411 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:03.411 [2024-07-25 10:24:53.183346] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.411 [2024-07-25 10:24:53.183612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.668 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:03.924 malloc0 00:18:03.924 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:04.181 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG 00:18:04.439 [2024-07-25 10:24:54.075371] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1522170 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1522170 /var/tmp/bdevperf.sock 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1522170 ']' 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.439 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.439 [2024-07-25 10:24:54.143176] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:18:04.439 [2024-07-25 10:24:54.143273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522170 ] 00:18:04.439 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.439 [2024-07-25 10:24:54.198876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.697 [2024-07-25 10:24:54.297440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.697 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.697 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:04.697 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG 00:18:04.955 [2024-07-25 10:24:54.606700] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.955 [2024-07-25 10:24:54.606811] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:04.955 TLSTESTn1 00:18:04.955 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:05.520 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:05.520 "subsystems": [ 00:18:05.520 { 00:18:05.520 "subsystem": "keyring", 00:18:05.520 "config": [] 00:18:05.520 }, 00:18:05.520 { 00:18:05.520 "subsystem": "iobuf", 00:18:05.520 "config": [ 00:18:05.520 { 00:18:05.520 "method": "iobuf_set_options", 00:18:05.520 "params": { 00:18:05.520 "small_pool_count": 8192, 00:18:05.520 "large_pool_count": 1024, 00:18:05.520 "small_bufsize": 8192, 00:18:05.520 "large_bufsize": 135168 00:18:05.520 } 00:18:05.520 } 00:18:05.520 ] 00:18:05.520 }, 00:18:05.520 { 00:18:05.520 "subsystem": "sock", 00:18:05.520 "config": [ 00:18:05.520 { 00:18:05.520 "method": "sock_set_default_impl", 00:18:05.520 "params": { 00:18:05.520 "impl_name": "posix" 00:18:05.520 } 00:18:05.520 }, 00:18:05.520 { 00:18:05.520 "method": "sock_impl_set_options", 00:18:05.520 "params": { 00:18:05.520 "impl_name": "ssl", 00:18:05.520 "recv_buf_size": 4096, 00:18:05.520 "send_buf_size": 
4096, 00:18:05.520 "enable_recv_pipe": true, 00:18:05.520 "enable_quickack": false, 00:18:05.520 "enable_placement_id": 0, 00:18:05.520 "enable_zerocopy_send_server": true, 00:18:05.520 "enable_zerocopy_send_client": false, 00:18:05.520 "zerocopy_threshold": 0, 00:18:05.520 "tls_version": 0, 00:18:05.520 "enable_ktls": false 00:18:05.520 } 00:18:05.520 }, 00:18:05.520 { 00:18:05.520 "method": "sock_impl_set_options", 00:18:05.520 "params": { 00:18:05.520 "impl_name": "posix", 00:18:05.520 "recv_buf_size": 2097152, 00:18:05.520 "send_buf_size": 2097152, 00:18:05.520 "enable_recv_pipe": true, 00:18:05.520 "enable_quickack": false, 00:18:05.520 "enable_placement_id": 0, 00:18:05.520 "enable_zerocopy_send_server": true, 00:18:05.521 "enable_zerocopy_send_client": false, 00:18:05.521 "zerocopy_threshold": 0, 00:18:05.521 "tls_version": 0, 00:18:05.521 "enable_ktls": false 00:18:05.521 } 00:18:05.521 } 00:18:05.521 ] 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "subsystem": "vmd", 00:18:05.521 "config": [] 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "subsystem": "accel", 00:18:05.521 "config": [ 00:18:05.521 { 00:18:05.521 "method": "accel_set_options", 00:18:05.521 "params": { 00:18:05.521 "small_cache_size": 128, 00:18:05.521 "large_cache_size": 16, 00:18:05.521 "task_count": 2048, 00:18:05.521 "sequence_count": 2048, 00:18:05.521 "buf_count": 2048 00:18:05.521 } 00:18:05.521 } 00:18:05.521 ] 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "subsystem": "bdev", 00:18:05.521 "config": [ 00:18:05.521 { 00:18:05.521 "method": "bdev_set_options", 00:18:05.521 "params": { 00:18:05.521 "bdev_io_pool_size": 65535, 00:18:05.521 "bdev_io_cache_size": 256, 00:18:05.521 "bdev_auto_examine": true, 00:18:05.521 "iobuf_small_cache_size": 128, 00:18:05.521 "iobuf_large_cache_size": 16 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "bdev_raid_set_options", 00:18:05.521 "params": { 00:18:05.521 "process_window_size_kb": 1024, 00:18:05.521 "process_max_bandwidth_mb_sec": 0 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "bdev_iscsi_set_options", 00:18:05.521 "params": { 00:18:05.521 "timeout_sec": 30 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "bdev_nvme_set_options", 00:18:05.521 "params": { 00:18:05.521 "action_on_timeout": "none", 00:18:05.521 "timeout_us": 0, 00:18:05.521 "timeout_admin_us": 0, 00:18:05.521 "keep_alive_timeout_ms": 10000, 00:18:05.521 "arbitration_burst": 0, 00:18:05.521 "low_priority_weight": 0, 00:18:05.521 "medium_priority_weight": 0, 00:18:05.521 "high_priority_weight": 0, 00:18:05.521 "nvme_adminq_poll_period_us": 10000, 00:18:05.521 "nvme_ioq_poll_period_us": 0, 00:18:05.521 "io_queue_requests": 0, 00:18:05.521 "delay_cmd_submit": true, 00:18:05.521 "transport_retry_count": 4, 00:18:05.521 "bdev_retry_count": 3, 00:18:05.521 "transport_ack_timeout": 0, 00:18:05.521 "ctrlr_loss_timeout_sec": 0, 00:18:05.521 "reconnect_delay_sec": 0, 00:18:05.521 "fast_io_fail_timeout_sec": 0, 00:18:05.521 "disable_auto_failback": false, 00:18:05.521 "generate_uuids": false, 00:18:05.521 "transport_tos": 0, 00:18:05.521 "nvme_error_stat": false, 00:18:05.521 "rdma_srq_size": 0, 00:18:05.521 "io_path_stat": false, 00:18:05.521 "allow_accel_sequence": false, 00:18:05.521 "rdma_max_cq_size": 0, 00:18:05.521 "rdma_cm_event_timeout_ms": 0, 00:18:05.521 "dhchap_digests": [ 00:18:05.521 "sha256", 00:18:05.521 "sha384", 00:18:05.521 "sha512" 00:18:05.521 ], 00:18:05.521 "dhchap_dhgroups": [ 00:18:05.521 "null", 00:18:05.521 "ffdhe2048", 00:18:05.521 
"ffdhe3072", 00:18:05.521 "ffdhe4096", 00:18:05.521 "ffdhe6144", 00:18:05.521 "ffdhe8192" 00:18:05.521 ] 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "bdev_nvme_set_hotplug", 00:18:05.521 "params": { 00:18:05.521 "period_us": 100000, 00:18:05.521 "enable": false 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "bdev_malloc_create", 00:18:05.521 "params": { 00:18:05.521 "name": "malloc0", 00:18:05.521 "num_blocks": 8192, 00:18:05.521 "block_size": 4096, 00:18:05.521 "physical_block_size": 4096, 00:18:05.521 "uuid": "09595fe8-0f4f-46eb-b932-f7f504db9306", 00:18:05.521 "optimal_io_boundary": 0, 00:18:05.521 "md_size": 0, 00:18:05.521 "dif_type": 0, 00:18:05.521 "dif_is_head_of_md": false, 00:18:05.521 "dif_pi_format": 0 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "bdev_wait_for_examine" 00:18:05.521 } 00:18:05.521 ] 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "subsystem": "nbd", 00:18:05.521 "config": [] 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "subsystem": "scheduler", 00:18:05.521 "config": [ 00:18:05.521 { 00:18:05.521 "method": "framework_set_scheduler", 00:18:05.521 "params": { 00:18:05.521 "name": "static" 00:18:05.521 } 00:18:05.521 } 00:18:05.521 ] 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "subsystem": "nvmf", 00:18:05.521 "config": [ 00:18:05.521 { 00:18:05.521 "method": "nvmf_set_config", 00:18:05.521 "params": { 00:18:05.521 "discovery_filter": "match_any", 00:18:05.521 "admin_cmd_passthru": { 00:18:05.521 "identify_ctrlr": false 00:18:05.521 } 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "nvmf_set_max_subsystems", 00:18:05.521 "params": { 00:18:05.521 "max_subsystems": 1024 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "nvmf_set_crdt", 00:18:05.521 "params": { 00:18:05.521 "crdt1": 0, 00:18:05.521 "crdt2": 0, 00:18:05.521 "crdt3": 0 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "nvmf_create_transport", 00:18:05.521 "params": { 00:18:05.521 "trtype": "TCP", 00:18:05.521 "max_queue_depth": 128, 00:18:05.521 "max_io_qpairs_per_ctrlr": 127, 00:18:05.521 "in_capsule_data_size": 4096, 00:18:05.521 "max_io_size": 131072, 00:18:05.521 "io_unit_size": 131072, 00:18:05.521 "max_aq_depth": 128, 00:18:05.521 "num_shared_buffers": 511, 00:18:05.521 "buf_cache_size": 4294967295, 00:18:05.521 "dif_insert_or_strip": false, 00:18:05.521 "zcopy": false, 00:18:05.521 "c2h_success": false, 00:18:05.521 "sock_priority": 0, 00:18:05.521 "abort_timeout_sec": 1, 00:18:05.521 "ack_timeout": 0, 00:18:05.521 "data_wr_pool_size": 0 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "nvmf_create_subsystem", 00:18:05.521 "params": { 00:18:05.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.521 "allow_any_host": false, 00:18:05.521 "serial_number": "SPDK00000000000001", 00:18:05.521 "model_number": "SPDK bdev Controller", 00:18:05.521 "max_namespaces": 10, 00:18:05.521 "min_cntlid": 1, 00:18:05.521 "max_cntlid": 65519, 00:18:05.521 "ana_reporting": false 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "nvmf_subsystem_add_host", 00:18:05.521 "params": { 00:18:05.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.521 "host": "nqn.2016-06.io.spdk:host1", 00:18:05.521 "psk": "/tmp/tmp.hps9t6l2BG" 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "nvmf_subsystem_add_ns", 00:18:05.521 "params": { 00:18:05.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.521 "namespace": { 00:18:05.521 "nsid": 1, 00:18:05.521 
"bdev_name": "malloc0", 00:18:05.521 "nguid": "09595FE80F4F46EBB932F7F504DB9306", 00:18:05.521 "uuid": "09595fe8-0f4f-46eb-b932-f7f504db9306", 00:18:05.521 "no_auto_visible": false 00:18:05.521 } 00:18:05.521 } 00:18:05.521 }, 00:18:05.521 { 00:18:05.521 "method": "nvmf_subsystem_add_listener", 00:18:05.521 "params": { 00:18:05.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.521 "listen_address": { 00:18:05.521 "trtype": "TCP", 00:18:05.521 "adrfam": "IPv4", 00:18:05.521 "traddr": "10.0.0.2", 00:18:05.521 "trsvcid": "4420" 00:18:05.522 }, 00:18:05.522 "secure_channel": true 00:18:05.522 } 00:18:05.522 } 00:18:05.522 ] 00:18:05.522 } 00:18:05.522 ] 00:18:05.522 }' 00:18:05.522 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:05.780 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:05.780 "subsystems": [ 00:18:05.780 { 00:18:05.780 "subsystem": "keyring", 00:18:05.780 "config": [] 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "subsystem": "iobuf", 00:18:05.780 "config": [ 00:18:05.780 { 00:18:05.780 "method": "iobuf_set_options", 00:18:05.780 "params": { 00:18:05.780 "small_pool_count": 8192, 00:18:05.780 "large_pool_count": 1024, 00:18:05.780 "small_bufsize": 8192, 00:18:05.780 "large_bufsize": 135168 00:18:05.780 } 00:18:05.780 } 00:18:05.780 ] 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "subsystem": "sock", 00:18:05.780 "config": [ 00:18:05.780 { 00:18:05.780 "method": "sock_set_default_impl", 00:18:05.780 "params": { 00:18:05.780 "impl_name": "posix" 00:18:05.780 } 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "method": "sock_impl_set_options", 00:18:05.780 "params": { 00:18:05.780 "impl_name": "ssl", 00:18:05.780 "recv_buf_size": 4096, 00:18:05.780 "send_buf_size": 4096, 00:18:05.780 "enable_recv_pipe": true, 00:18:05.780 "enable_quickack": false, 00:18:05.780 "enable_placement_id": 0, 00:18:05.780 "enable_zerocopy_send_server": true, 00:18:05.780 "enable_zerocopy_send_client": false, 00:18:05.780 "zerocopy_threshold": 0, 00:18:05.780 "tls_version": 0, 00:18:05.780 "enable_ktls": false 00:18:05.780 } 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "method": "sock_impl_set_options", 00:18:05.780 "params": { 00:18:05.780 "impl_name": "posix", 00:18:05.780 "recv_buf_size": 2097152, 00:18:05.780 "send_buf_size": 2097152, 00:18:05.780 "enable_recv_pipe": true, 00:18:05.780 "enable_quickack": false, 00:18:05.780 "enable_placement_id": 0, 00:18:05.780 "enable_zerocopy_send_server": true, 00:18:05.780 "enable_zerocopy_send_client": false, 00:18:05.780 "zerocopy_threshold": 0, 00:18:05.780 "tls_version": 0, 00:18:05.780 "enable_ktls": false 00:18:05.780 } 00:18:05.780 } 00:18:05.780 ] 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "subsystem": "vmd", 00:18:05.780 "config": [] 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "subsystem": "accel", 00:18:05.780 "config": [ 00:18:05.780 { 00:18:05.780 "method": "accel_set_options", 00:18:05.780 "params": { 00:18:05.780 "small_cache_size": 128, 00:18:05.780 "large_cache_size": 16, 00:18:05.780 "task_count": 2048, 00:18:05.780 "sequence_count": 2048, 00:18:05.780 "buf_count": 2048 00:18:05.780 } 00:18:05.780 } 00:18:05.780 ] 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "subsystem": "bdev", 00:18:05.780 "config": [ 00:18:05.780 { 00:18:05.780 "method": "bdev_set_options", 00:18:05.780 "params": { 00:18:05.780 "bdev_io_pool_size": 65535, 00:18:05.780 "bdev_io_cache_size": 256, 00:18:05.780 
"bdev_auto_examine": true, 00:18:05.780 "iobuf_small_cache_size": 128, 00:18:05.780 "iobuf_large_cache_size": 16 00:18:05.780 } 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "method": "bdev_raid_set_options", 00:18:05.780 "params": { 00:18:05.780 "process_window_size_kb": 1024, 00:18:05.780 "process_max_bandwidth_mb_sec": 0 00:18:05.780 } 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "method": "bdev_iscsi_set_options", 00:18:05.780 "params": { 00:18:05.780 "timeout_sec": 30 00:18:05.780 } 00:18:05.780 }, 00:18:05.780 { 00:18:05.780 "method": "bdev_nvme_set_options", 00:18:05.780 "params": { 00:18:05.780 "action_on_timeout": "none", 00:18:05.780 "timeout_us": 0, 00:18:05.780 "timeout_admin_us": 0, 00:18:05.780 "keep_alive_timeout_ms": 10000, 00:18:05.780 "arbitration_burst": 0, 00:18:05.780 "low_priority_weight": 0, 00:18:05.780 "medium_priority_weight": 0, 00:18:05.781 "high_priority_weight": 0, 00:18:05.781 "nvme_adminq_poll_period_us": 10000, 00:18:05.781 "nvme_ioq_poll_period_us": 0, 00:18:05.781 "io_queue_requests": 512, 00:18:05.781 "delay_cmd_submit": true, 00:18:05.781 "transport_retry_count": 4, 00:18:05.781 "bdev_retry_count": 3, 00:18:05.781 "transport_ack_timeout": 0, 00:18:05.781 "ctrlr_loss_timeout_sec": 0, 00:18:05.781 "reconnect_delay_sec": 0, 00:18:05.781 "fast_io_fail_timeout_sec": 0, 00:18:05.781 "disable_auto_failback": false, 00:18:05.781 "generate_uuids": false, 00:18:05.781 "transport_tos": 0, 00:18:05.781 "nvme_error_stat": false, 00:18:05.781 "rdma_srq_size": 0, 00:18:05.781 "io_path_stat": false, 00:18:05.781 "allow_accel_sequence": false, 00:18:05.781 "rdma_max_cq_size": 0, 00:18:05.781 "rdma_cm_event_timeout_ms": 0, 00:18:05.781 "dhchap_digests": [ 00:18:05.781 "sha256", 00:18:05.781 "sha384", 00:18:05.781 "sha512" 00:18:05.781 ], 00:18:05.781 "dhchap_dhgroups": [ 00:18:05.781 "null", 00:18:05.781 "ffdhe2048", 00:18:05.781 "ffdhe3072", 00:18:05.781 "ffdhe4096", 00:18:05.781 "ffdhe6144", 00:18:05.781 "ffdhe8192" 00:18:05.781 ] 00:18:05.781 } 00:18:05.781 }, 00:18:05.781 { 00:18:05.781 "method": "bdev_nvme_attach_controller", 00:18:05.781 "params": { 00:18:05.781 "name": "TLSTEST", 00:18:05.781 "trtype": "TCP", 00:18:05.781 "adrfam": "IPv4", 00:18:05.781 "traddr": "10.0.0.2", 00:18:05.781 "trsvcid": "4420", 00:18:05.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.781 "prchk_reftag": false, 00:18:05.781 "prchk_guard": false, 00:18:05.781 "ctrlr_loss_timeout_sec": 0, 00:18:05.781 "reconnect_delay_sec": 0, 00:18:05.781 "fast_io_fail_timeout_sec": 0, 00:18:05.781 "psk": "/tmp/tmp.hps9t6l2BG", 00:18:05.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.781 "hdgst": false, 00:18:05.781 "ddgst": false 00:18:05.781 } 00:18:05.781 }, 00:18:05.781 { 00:18:05.781 "method": "bdev_nvme_set_hotplug", 00:18:05.781 "params": { 00:18:05.781 "period_us": 100000, 00:18:05.781 "enable": false 00:18:05.781 } 00:18:05.781 }, 00:18:05.781 { 00:18:05.781 "method": "bdev_wait_for_examine" 00:18:05.781 } 00:18:05.781 ] 00:18:05.781 }, 00:18:05.781 { 00:18:05.781 "subsystem": "nbd", 00:18:05.781 "config": [] 00:18:05.781 } 00:18:05.781 ] 00:18:05.781 }' 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1522170 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1522170 ']' 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1522170 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1522170 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1522170' 00:18:05.781 killing process with pid 1522170 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1522170 00:18:05.781 Received shutdown signal, test time was about 10.000000 seconds 00:18:05.781 00:18:05.781 Latency(us) 00:18:05.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.781 =================================================================================================================== 00:18:05.781 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:05.781 [2024-07-25 10:24:55.358239] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1522170 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1521950 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1521950 ']' 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1521950 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.781 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1521950 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1521950' 00:18:06.040 killing process with pid 1521950 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1521950 00:18:06.040 [2024-07-25 10:24:55.574686] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1521950 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.040 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:06.040 "subsystems": [ 00:18:06.040 { 
00:18:06.040 "subsystem": "keyring", 00:18:06.040 "config": [] 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "subsystem": "iobuf", 00:18:06.040 "config": [ 00:18:06.040 { 00:18:06.040 "method": "iobuf_set_options", 00:18:06.040 "params": { 00:18:06.040 "small_pool_count": 8192, 00:18:06.040 "large_pool_count": 1024, 00:18:06.040 "small_bufsize": 8192, 00:18:06.040 "large_bufsize": 135168 00:18:06.040 } 00:18:06.040 } 00:18:06.040 ] 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "subsystem": "sock", 00:18:06.040 "config": [ 00:18:06.040 { 00:18:06.040 "method": "sock_set_default_impl", 00:18:06.040 "params": { 00:18:06.040 "impl_name": "posix" 00:18:06.040 } 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "method": "sock_impl_set_options", 00:18:06.040 "params": { 00:18:06.040 "impl_name": "ssl", 00:18:06.040 "recv_buf_size": 4096, 00:18:06.040 "send_buf_size": 4096, 00:18:06.040 "enable_recv_pipe": true, 00:18:06.040 "enable_quickack": false, 00:18:06.040 "enable_placement_id": 0, 00:18:06.040 "enable_zerocopy_send_server": true, 00:18:06.040 "enable_zerocopy_send_client": false, 00:18:06.040 "zerocopy_threshold": 0, 00:18:06.040 "tls_version": 0, 00:18:06.040 "enable_ktls": false 00:18:06.040 } 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "method": "sock_impl_set_options", 00:18:06.040 "params": { 00:18:06.040 "impl_name": "posix", 00:18:06.040 "recv_buf_size": 2097152, 00:18:06.040 "send_buf_size": 2097152, 00:18:06.040 "enable_recv_pipe": true, 00:18:06.040 "enable_quickack": false, 00:18:06.040 "enable_placement_id": 0, 00:18:06.040 "enable_zerocopy_send_server": true, 00:18:06.040 "enable_zerocopy_send_client": false, 00:18:06.040 "zerocopy_threshold": 0, 00:18:06.040 "tls_version": 0, 00:18:06.040 "enable_ktls": false 00:18:06.040 } 00:18:06.040 } 00:18:06.040 ] 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "subsystem": "vmd", 00:18:06.040 "config": [] 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "subsystem": "accel", 00:18:06.040 "config": [ 00:18:06.040 { 00:18:06.040 "method": "accel_set_options", 00:18:06.040 "params": { 00:18:06.040 "small_cache_size": 128, 00:18:06.040 "large_cache_size": 16, 00:18:06.040 "task_count": 2048, 00:18:06.040 "sequence_count": 2048, 00:18:06.040 "buf_count": 2048 00:18:06.040 } 00:18:06.040 } 00:18:06.040 ] 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "subsystem": "bdev", 00:18:06.040 "config": [ 00:18:06.040 { 00:18:06.040 "method": "bdev_set_options", 00:18:06.040 "params": { 00:18:06.040 "bdev_io_pool_size": 65535, 00:18:06.040 "bdev_io_cache_size": 256, 00:18:06.040 "bdev_auto_examine": true, 00:18:06.040 "iobuf_small_cache_size": 128, 00:18:06.040 "iobuf_large_cache_size": 16 00:18:06.040 } 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "method": "bdev_raid_set_options", 00:18:06.040 "params": { 00:18:06.040 "process_window_size_kb": 1024, 00:18:06.040 "process_max_bandwidth_mb_sec": 0 00:18:06.040 } 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "method": "bdev_iscsi_set_options", 00:18:06.040 "params": { 00:18:06.040 "timeout_sec": 30 00:18:06.040 } 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "method": "bdev_nvme_set_options", 00:18:06.040 "params": { 00:18:06.040 "action_on_timeout": "none", 00:18:06.040 "timeout_us": 0, 00:18:06.040 "timeout_admin_us": 0, 00:18:06.040 "keep_alive_timeout_ms": 10000, 00:18:06.040 "arbitration_burst": 0, 00:18:06.040 "low_priority_weight": 0, 00:18:06.040 "medium_priority_weight": 0, 00:18:06.040 "high_priority_weight": 0, 00:18:06.040 "nvme_adminq_poll_period_us": 10000, 00:18:06.040 "nvme_ioq_poll_period_us": 0, 00:18:06.040 
"io_queue_requests": 0, 00:18:06.040 "delay_cmd_submit": true, 00:18:06.040 "transport_retry_count": 4, 00:18:06.040 "bdev_retry_count": 3, 00:18:06.040 "transport_ack_timeout": 0, 00:18:06.040 "ctrlr_loss_timeout_sec": 0, 00:18:06.040 "reconnect_delay_sec": 0, 00:18:06.040 "fast_io_fail_timeout_sec": 0, 00:18:06.040 "disable_auto_failback": false, 00:18:06.040 "generate_uuids": false, 00:18:06.040 "transport_tos": 0, 00:18:06.040 "nvme_error_stat": false, 00:18:06.040 "rdma_srq_size": 0, 00:18:06.040 "io_path_stat": false, 00:18:06.040 "allow_accel_sequence": false, 00:18:06.040 "rdma_max_cq_size": 0, 00:18:06.040 "rdma_cm_event_timeout_ms": 0, 00:18:06.040 "dhchap_digests": [ 00:18:06.040 "sha256", 00:18:06.040 "sha384", 00:18:06.040 "sha512" 00:18:06.040 ], 00:18:06.040 "dhchap_dhgroups": [ 00:18:06.040 "null", 00:18:06.040 "ffdhe2048", 00:18:06.040 "ffdhe3072", 00:18:06.040 "ffdhe4096", 00:18:06.040 "ffdhe6144", 00:18:06.040 "ffdhe8192" 00:18:06.040 ] 00:18:06.040 } 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "method": "bdev_nvme_set_hotplug", 00:18:06.040 "params": { 00:18:06.040 "period_us": 100000, 00:18:06.040 "enable": false 00:18:06.040 } 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "method": "bdev_malloc_create", 00:18:06.040 "params": { 00:18:06.040 "name": "malloc0", 00:18:06.040 "num_blocks": 8192, 00:18:06.040 "block_size": 4096, 00:18:06.040 "physical_block_size": 4096, 00:18:06.040 "uuid": "09595fe8-0f4f-46eb-b932-f7f504db9306", 00:18:06.040 "optimal_io_boundary": 0, 00:18:06.040 "md_size": 0, 00:18:06.040 "dif_type": 0, 00:18:06.040 "dif_is_head_of_md": false, 00:18:06.040 "dif_pi_format": 0 00:18:06.040 } 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "method": "bdev_wait_for_examine" 00:18:06.040 } 00:18:06.040 ] 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "subsystem": "nbd", 00:18:06.040 "config": [] 00:18:06.040 }, 00:18:06.040 { 00:18:06.040 "subsystem": "scheduler", 00:18:06.040 "config": [ 00:18:06.040 { 00:18:06.040 "method": "framework_set_scheduler", 00:18:06.041 "params": { 00:18:06.041 "name": "static" 00:18:06.041 } 00:18:06.041 } 00:18:06.041 ] 00:18:06.041 }, 00:18:06.041 { 00:18:06.041 "subsystem": "nvmf", 00:18:06.041 "config": [ 00:18:06.041 { 00:18:06.041 "method": "nvmf_set_config", 00:18:06.041 "params": { 00:18:06.041 "discovery_filter": "match_any", 00:18:06.041 "admin_cmd_passthru": { 00:18:06.041 "identify_ctrlr": false 00:18:06.041 } 00:18:06.041 } 00:18:06.041 }, 00:18:06.041 { 00:18:06.041 "method": "nvmf_set_max_subsystems", 00:18:06.041 "params": { 00:18:06.041 "max_subsystems": 1024 00:18:06.041 } 00:18:06.041 }, 00:18:06.041 { 00:18:06.041 "method": "nvmf_set_crdt", 00:18:06.041 "params": { 00:18:06.041 "crdt1": 0, 00:18:06.041 "crdt2": 0, 00:18:06.041 "crdt3": 0 00:18:06.041 } 00:18:06.041 }, 00:18:06.041 { 00:18:06.041 "method": "nvmf_create_transport", 00:18:06.041 "params": { 00:18:06.041 "trtype": "TCP", 00:18:06.041 "max_queue_depth": 128, 00:18:06.041 "max_io_qpairs_per_ctrlr": 127, 00:18:06.041 "in_capsule_data_size": 4096, 00:18:06.041 "max_io_size": 131072, 00:18:06.041 "io_unit_size": 131072, 00:18:06.041 "max_aq_depth": 128, 00:18:06.041 "num_shared_buffers": 511, 00:18:06.041 "buf_cache_size": 4294967295, 00:18:06.041 "dif_insert_or_strip": false, 00:18:06.041 "zcopy": false, 00:18:06.041 "c2h_success": false, 00:18:06.041 "sock_priority": 0, 00:18:06.041 "abort_timeout_sec": 1, 00:18:06.041 "ack_timeout": 0, 00:18:06.041 "data_wr_pool_size": 0 00:18:06.041 } 00:18:06.041 }, 00:18:06.041 { 00:18:06.041 "method": 
"nvmf_create_subsystem", 00:18:06.041 "params": { 00:18:06.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.041 "allow_any_host": false, 00:18:06.041 "serial_number": "SPDK00000000000001", 00:18:06.041 "model_number": "SPDK bdev Controller", 00:18:06.041 "max_namespaces": 10, 00:18:06.041 "min_cntlid": 1, 00:18:06.041 "max_cntlid": 65519, 00:18:06.041 "ana_reporting": false 00:18:06.041 } 00:18:06.041 }, 00:18:06.041 { 00:18:06.041 "method": "nvmf_subsystem_add_host", 00:18:06.041 "params": { 00:18:06.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.041 "host": "nqn.2016-06.io.spdk:host1", 00:18:06.041 "psk": "/tmp/tmp.hps9t6l2BG" 00:18:06.041 } 00:18:06.041 }, 00:18:06.041 { 00:18:06.041 "method": "nvmf_subsystem_add_ns", 00:18:06.041 "params": { 00:18:06.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.041 "namespace": { 00:18:06.041 "nsid": 1, 00:18:06.041 "bdev_name": "malloc0", 00:18:06.041 "nguid": "09595FE80F4F46EBB932F7F504DB9306", 00:18:06.041 "uuid": "09595fe8-0f4f-46eb-b932-f7f504db9306", 00:18:06.041 "no_auto_visible": false 00:18:06.041 } 00:18:06.041 } 00:18:06.041 }, 00:18:06.041 { 00:18:06.041 "method": "nvmf_subsystem_add_listener", 00:18:06.041 "params": { 00:18:06.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.041 "listen_address": { 00:18:06.041 "trtype": "TCP", 00:18:06.041 "adrfam": "IPv4", 00:18:06.041 "traddr": "10.0.0.2", 00:18:06.041 "trsvcid": "4420" 00:18:06.041 }, 00:18:06.041 "secure_channel": true 00:18:06.041 } 00:18:06.041 } 00:18:06.041 ] 00:18:06.041 } 00:18:06.041 ] 00:18:06.041 }' 00:18:06.041 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1522300 00:18:06.041 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:06.041 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1522300 00:18:06.041 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1522300 ']' 00:18:06.041 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.041 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:06.041 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.041 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:06.041 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.299 [2024-07-25 10:24:55.818548] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:18:06.299 [2024-07-25 10:24:55.818646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.299 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.299 [2024-07-25 10:24:55.877319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.299 [2024-07-25 10:24:55.972014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:06.299 [2024-07-25 10:24:55.972068] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.299 [2024-07-25 10:24:55.972081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.299 [2024-07-25 10:24:55.972112] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.299 [2024-07-25 10:24:55.972124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.299 [2024-07-25 10:24:55.972208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.557 [2024-07-25 10:24:56.183965] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.557 [2024-07-25 10:24:56.218946] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:06.557 [2024-07-25 10:24:56.235008] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:06.557 [2024-07-25 10:24:56.235234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1522417 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1522417 /var/tmp/bdevperf.sock 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1522417 ']' 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
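The initiator side repeats the same pattern in the lines that follow: bdevperf is started idle (-z) with its own JSON config on fd 63 and is then driven over /var/tmp/bdevperf.sock. Condensed into a sketch, with the queue depth, I/O size, workload, and timeout taken from this log (the fd-63 process-substitution wiring and the $bperfcfg variable name are assumptions, not verbatim harness code):

# sketch: bdevperf in wait-for-RPC mode, QD 128, 4096 B verify workload, 10 s
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 63< <(echo "$bperfcfg") &
# once it is listening, start the run (as tls.sh@211 does further down):
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests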
00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:07.123 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:07.123 "subsystems": [ 00:18:07.123 { 00:18:07.123 "subsystem": "keyring", 00:18:07.123 "config": [] 00:18:07.123 }, 00:18:07.123 { 00:18:07.123 "subsystem": "iobuf", 00:18:07.123 "config": [ 00:18:07.123 { 00:18:07.123 "method": "iobuf_set_options", 00:18:07.123 "params": { 00:18:07.123 "small_pool_count": 8192, 00:18:07.123 "large_pool_count": 1024, 00:18:07.123 "small_bufsize": 8192, 00:18:07.123 "large_bufsize": 135168 00:18:07.123 } 00:18:07.123 } 00:18:07.123 ] 00:18:07.123 }, 00:18:07.123 { 00:18:07.123 "subsystem": "sock", 00:18:07.123 "config": [ 00:18:07.123 { 00:18:07.123 "method": "sock_set_default_impl", 00:18:07.123 "params": { 00:18:07.123 "impl_name": "posix" 00:18:07.123 } 00:18:07.123 }, 00:18:07.123 { 00:18:07.123 "method": "sock_impl_set_options", 00:18:07.123 "params": { 00:18:07.123 "impl_name": "ssl", 00:18:07.123 "recv_buf_size": 4096, 00:18:07.123 "send_buf_size": 4096, 00:18:07.123 "enable_recv_pipe": true, 00:18:07.123 "enable_quickack": false, 00:18:07.123 "enable_placement_id": 0, 00:18:07.123 "enable_zerocopy_send_server": true, 00:18:07.123 "enable_zerocopy_send_client": false, 00:18:07.123 "zerocopy_threshold": 0, 00:18:07.123 "tls_version": 0, 00:18:07.123 "enable_ktls": false 00:18:07.123 } 00:18:07.123 }, 00:18:07.123 { 00:18:07.123 "method": "sock_impl_set_options", 00:18:07.123 "params": { 00:18:07.123 "impl_name": "posix", 00:18:07.123 "recv_buf_size": 2097152, 00:18:07.123 "send_buf_size": 2097152, 00:18:07.123 "enable_recv_pipe": true, 00:18:07.123 "enable_quickack": false, 00:18:07.123 "enable_placement_id": 0, 00:18:07.123 "enable_zerocopy_send_server": true, 00:18:07.123 "enable_zerocopy_send_client": false, 00:18:07.123 "zerocopy_threshold": 0, 00:18:07.123 "tls_version": 0, 00:18:07.123 "enable_ktls": false 00:18:07.123 } 00:18:07.123 } 00:18:07.123 ] 00:18:07.123 }, 00:18:07.123 { 00:18:07.123 "subsystem": "vmd", 00:18:07.123 "config": [] 00:18:07.123 }, 00:18:07.123 { 00:18:07.123 "subsystem": "accel", 00:18:07.123 "config": [ 00:18:07.123 { 00:18:07.123 "method": "accel_set_options", 00:18:07.123 "params": { 00:18:07.123 "small_cache_size": 128, 00:18:07.123 "large_cache_size": 16, 00:18:07.123 "task_count": 2048, 00:18:07.123 "sequence_count": 2048, 00:18:07.123 "buf_count": 2048 00:18:07.123 } 00:18:07.123 } 00:18:07.123 ] 00:18:07.123 }, 00:18:07.123 { 00:18:07.123 "subsystem": "bdev", 00:18:07.123 "config": [ 00:18:07.123 { 00:18:07.123 "method": "bdev_set_options", 00:18:07.123 "params": { 00:18:07.123 "bdev_io_pool_size": 65535, 00:18:07.123 "bdev_io_cache_size": 256, 00:18:07.123 "bdev_auto_examine": true, 00:18:07.124 "iobuf_small_cache_size": 128, 00:18:07.124 "iobuf_large_cache_size": 16 00:18:07.124 } 00:18:07.124 }, 00:18:07.124 { 00:18:07.124 "method": "bdev_raid_set_options", 00:18:07.124 "params": { 00:18:07.124 "process_window_size_kb": 1024, 00:18:07.124 "process_max_bandwidth_mb_sec": 0 00:18:07.124 } 00:18:07.124 }, 00:18:07.124 { 00:18:07.124 "method": "bdev_iscsi_set_options", 
00:18:07.124 "params": { 00:18:07.124 "timeout_sec": 30 00:18:07.124 } 00:18:07.124 }, 00:18:07.124 { 00:18:07.124 "method": "bdev_nvme_set_options", 00:18:07.124 "params": { 00:18:07.124 "action_on_timeout": "none", 00:18:07.124 "timeout_us": 0, 00:18:07.124 "timeout_admin_us": 0, 00:18:07.124 "keep_alive_timeout_ms": 10000, 00:18:07.124 "arbitration_burst": 0, 00:18:07.124 "low_priority_weight": 0, 00:18:07.124 "medium_priority_weight": 0, 00:18:07.124 "high_priority_weight": 0, 00:18:07.124 "nvme_adminq_poll_period_us": 10000, 00:18:07.124 "nvme_ioq_poll_period_us": 0, 00:18:07.124 "io_queue_requests": 512, 00:18:07.124 "delay_cmd_submit": true, 00:18:07.124 "transport_retry_count": 4, 00:18:07.124 "bdev_retry_count": 3, 00:18:07.124 "transport_ack_timeout": 0, 00:18:07.124 "ctrlr_loss_timeout_sec": 0, 00:18:07.124 "reconnect_delay_sec": 0, 00:18:07.124 "fast_io_fail_timeout_sec": 0, 00:18:07.124 "disable_auto_failback": false, 00:18:07.124 "generate_uuids": false, 00:18:07.124 "transport_tos": 0, 00:18:07.124 "nvme_error_stat": false, 00:18:07.124 "rdma_srq_size": 0, 00:18:07.124 "io_path_stat": false, 00:18:07.124 "allow_accel_sequence": false, 00:18:07.124 "rdma_max_cq_size": 0, 00:18:07.124 "rdma_cm_event_timeout_ms": 0, 00:18:07.124 "dhchap_digests": [ 00:18:07.124 "sha256", 00:18:07.124 "sha384", 00:18:07.124 "sha512" 00:18:07.124 ], 00:18:07.124 "dhchap_dhgroups": [ 00:18:07.124 "null", 00:18:07.124 "ffdhe2048", 00:18:07.124 "ffdhe3072", 00:18:07.124 "ffdhe4096", 00:18:07.124 "ffdhe6144", 00:18:07.124 "ffdhe8192" 00:18:07.124 ] 00:18:07.124 } 00:18:07.124 }, 00:18:07.124 { 00:18:07.124 "method": "bdev_nvme_attach_controller", 00:18:07.124 "params": { 00:18:07.124 "name": "TLSTEST", 00:18:07.124 "trtype": "TCP", 00:18:07.124 "adrfam": "IPv4", 00:18:07.124 "traddr": "10.0.0.2", 00:18:07.124 "trsvcid": "4420", 00:18:07.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.124 "prchk_reftag": false, 00:18:07.124 "prchk_guard": false, 00:18:07.124 "ctrlr_loss_timeout_sec": 0, 00:18:07.124 "reconnect_delay_sec": 0, 00:18:07.124 "fast_io_fail_timeout_sec": 0, 00:18:07.124 "psk": "/tmp/tmp.hps9t6l2BG", 00:18:07.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.124 "hdgst": false, 00:18:07.124 "ddgst": false 00:18:07.124 } 00:18:07.124 }, 00:18:07.124 { 00:18:07.124 "method": "bdev_nvme_set_hotplug", 00:18:07.124 "params": { 00:18:07.124 "period_us": 100000, 00:18:07.124 "enable": false 00:18:07.124 } 00:18:07.124 }, 00:18:07.124 { 00:18:07.124 "method": "bdev_wait_for_examine" 00:18:07.124 } 00:18:07.124 ] 00:18:07.124 }, 00:18:07.124 { 00:18:07.124 "subsystem": "nbd", 00:18:07.124 "config": [] 00:18:07.124 } 00:18:07.124 ] 00:18:07.124 }' 00:18:07.382 [2024-07-25 10:24:56.911630] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:18:07.382 [2024-07-25 10:24:56.911727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522417 ] 00:18:07.382 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.382 [2024-07-25 10:24:56.970230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.382 [2024-07-25 10:24:57.075607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.640 [2024-07-25 10:24:57.227690] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.640 [2024-07-25 10:24:57.227797] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:08.205 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.206 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:08.206 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:08.463 Running I/O for 10 seconds... 00:18:18.522 00:18:18.522 Latency(us) 00:18:18.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.522 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:18.522 Verification LBA range: start 0x0 length 0x2000 00:18:18.522 TLSTESTn1 : 10.04 2967.30 11.59 0.00 0.00 43046.32 6068.15 239230.67 00:18:18.522 =================================================================================================================== 00:18:18.522 Total : 2967.30 11.59 0.00 0.00 43046.32 6068.15 239230.67 00:18:18.522 0 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1522417 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1522417 ']' 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1522417 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1522417 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1522417' 00:18:18.522 killing process with pid 1522417 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1522417 00:18:18.522 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.522 00:18:18.522 Latency(us) 00:18:18.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.522 
=================================================================================================================== 00:18:18.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.522 [2024-07-25 10:25:08.128444] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:18.522 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1522417 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1522300 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1522300 ']' 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1522300 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1522300 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1522300' 00:18:18.780 killing process with pid 1522300 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1522300 00:18:18.780 [2024-07-25 10:25:08.342306] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1522300 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1523514 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1523514 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1523514 ']' 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
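This time the target (pid 1523514) comes up bare and is configured step by step over /var/tmp/spdk.sock; the setup_nvmf_tgt sequence replayed in the lines below reduces to these RPCs (every argument as it appears in this log):

# sketch of setup_nvmf_tgt: TCP transport, TLS-capable listener (-k), PSK-gated host
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG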
00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.780 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.039 [2024-07-25 10:25:08.594617] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:18:19.039 [2024-07-25 10:25:08.594718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.039 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.039 [2024-07-25 10:25:08.658701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.039 [2024-07-25 10:25:08.773231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.039 [2024-07-25 10:25:08.773298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.039 [2024-07-25 10:25:08.773314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.039 [2024-07-25 10:25:08.773328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.039 [2024-07-25 10:25:08.773339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.039 [2024-07-25 10:25:08.773369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.297 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.297 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:19.297 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.297 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.297 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.297 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.297 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.hps9t6l2BG 00:18:19.297 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hps9t6l2BG 00:18:19.297 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:19.555 [2024-07-25 10:25:09.172416] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.555 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:19.824 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:20.082 [2024-07-25 10:25:09.758020] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.082 [2024-07-25 10:25:09.758258] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.082 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:20.340 malloc0 00:18:20.340 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hps9t6l2BG 00:18:20.907 [2024-07-25 10:25:10.658323] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1523742 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1523742 /var/tmp/bdevperf.sock 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1523742 ']' 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.907 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.165 [2024-07-25 10:25:10.727442] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:18:21.165 [2024-07-25 10:25:10.727550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523742 ] 00:18:21.165 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.165 [2024-07-25 10:25:10.788409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.165 [2024-07-25 10:25:10.905073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.423 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.423 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:21.423 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hps9t6l2BG 00:18:21.681 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:21.939 [2024-07-25 10:25:11.582347] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.939 nvme0n1 00:18:21.939 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:22.197 Running I/O for 1 seconds... 00:18:23.130 00:18:23.130 Latency(us) 00:18:23.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.130 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:23.130 Verification LBA range: start 0x0 length 0x2000 00:18:23.130 nvme0n1 : 1.03 2941.98 11.49 0.00 0.00 42979.14 8349.77 42137.22 00:18:23.130 =================================================================================================================== 00:18:23.130 Total : 2941.98 11.49 0.00 0.00 42979.14 8349.77 42137.22 00:18:23.130 0 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1523742 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1523742 ']' 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1523742 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1523742 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1523742' 00:18:23.130 killing process with pid 1523742 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1523742 00:18:23.130 Received shutdown signal, 
test time was about 1.000000 seconds 00:18:23.130 00:18:23.130 Latency(us) 00:18:23.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.130 =================================================================================================================== 00:18:23.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.130 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1523742 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1523514 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1523514 ']' 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1523514 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1523514 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1523514' 00:18:23.389 killing process with pid 1523514 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1523514 00:18:23.389 [2024-07-25 10:25:13.114286] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:23.389 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1523514 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1523958 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1523958 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1523958 ']' 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.648 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.648 [2024-07-25 10:25:13.395766] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:18:23.648 [2024-07-25 10:25:13.395869] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.905 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.905 [2024-07-25 10:25:13.460545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.905 [2024-07-25 10:25:13.576971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.905 [2024-07-25 10:25:13.577029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.905 [2024-07-25 10:25:13.577045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.905 [2024-07-25 10:25:13.577059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.905 [2024-07-25 10:25:13.577070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.905 [2024-07-25 10:25:13.577100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.905 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.905 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:23.905 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:23.905 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.905 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.163 [2024-07-25 10:25:13.707194] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.163 malloc0 00:18:24.163 [2024-07-25 10:25:13.737873] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.163 [2024-07-25 10:25:13.748692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1523991 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1523991 /var/tmp/bdevperf.sock 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1523991 ']' 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.163 
10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.163 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.163 [2024-07-25 10:25:13.817009] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:18:24.163 [2024-07-25 10:25:13.817087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523991 ] 00:18:24.163 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.163 [2024-07-25 10:25:13.872745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.421 [2024-07-25 10:25:13.989652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.421 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.421 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:24.421 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hps9t6l2BG 00:18:24.679 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:24.937 [2024-07-25 10:25:14.556127] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.937 nvme0n1 00:18:24.937 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.196 Running I/O for 1 seconds... 
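The attach that kicked off this run appears to be the keyring-based successor to the deprecated spdk_nvme_ctrlr_opts.psk path (note the earlier nvme_ctrlr_psk deprecation warnings): the PSK file is registered as a named key first, then referenced by name. In isolation, both RPCs as they appear at tls.sh@257/@258 above:

# sketch: keyring-based TLS attach against the bdevperf RPC socket
rpc=scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hps9t6l2BG
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1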
00:18:26.132 00:18:26.132 Latency(us) 00:18:26.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.132 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:26.132 Verification LBA range: start 0x0 length 0x2000 00:18:26.132 nvme0n1 : 1.02 3094.83 12.09 0.00 0.00 40838.92 7087.60 40195.41 00:18:26.132 =================================================================================================================== 00:18:26.132 Total : 3094.83 12.09 0.00 0.00 40838.92 7087.60 40195.41 00:18:26.132 0 00:18:26.132 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:26.132 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.132 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.132 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.132 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:26.132 "subsystems": [ 00:18:26.132 { 00:18:26.132 "subsystem": "keyring", 00:18:26.132 "config": [ 00:18:26.132 { 00:18:26.132 "method": "keyring_file_add_key", 00:18:26.132 "params": { 00:18:26.132 "name": "key0", 00:18:26.132 "path": "/tmp/tmp.hps9t6l2BG" 00:18:26.132 } 00:18:26.132 } 00:18:26.132 ] 00:18:26.132 }, 00:18:26.132 { 00:18:26.132 "subsystem": "iobuf", 00:18:26.132 "config": [ 00:18:26.132 { 00:18:26.132 "method": "iobuf_set_options", 00:18:26.132 "params": { 00:18:26.132 "small_pool_count": 8192, 00:18:26.132 "large_pool_count": 1024, 00:18:26.132 "small_bufsize": 8192, 00:18:26.132 "large_bufsize": 135168 00:18:26.132 } 00:18:26.132 } 00:18:26.132 ] 00:18:26.132 }, 00:18:26.132 { 00:18:26.132 "subsystem": "sock", 00:18:26.132 "config": [ 00:18:26.132 { 00:18:26.132 "method": "sock_set_default_impl", 00:18:26.132 "params": { 00:18:26.132 "impl_name": "posix" 00:18:26.132 } 00:18:26.132 }, 00:18:26.132 { 00:18:26.132 "method": "sock_impl_set_options", 00:18:26.132 "params": { 00:18:26.132 "impl_name": "ssl", 00:18:26.132 "recv_buf_size": 4096, 00:18:26.132 "send_buf_size": 4096, 00:18:26.132 "enable_recv_pipe": true, 00:18:26.132 "enable_quickack": false, 00:18:26.132 "enable_placement_id": 0, 00:18:26.132 "enable_zerocopy_send_server": true, 00:18:26.132 "enable_zerocopy_send_client": false, 00:18:26.132 "zerocopy_threshold": 0, 00:18:26.132 "tls_version": 0, 00:18:26.132 "enable_ktls": false 00:18:26.132 } 00:18:26.132 }, 00:18:26.132 { 00:18:26.132 "method": "sock_impl_set_options", 00:18:26.132 "params": { 00:18:26.132 "impl_name": "posix", 00:18:26.132 "recv_buf_size": 2097152, 00:18:26.132 "send_buf_size": 2097152, 00:18:26.132 "enable_recv_pipe": true, 00:18:26.132 "enable_quickack": false, 00:18:26.132 "enable_placement_id": 0, 00:18:26.132 "enable_zerocopy_send_server": true, 00:18:26.132 "enable_zerocopy_send_client": false, 00:18:26.132 "zerocopy_threshold": 0, 00:18:26.132 "tls_version": 0, 00:18:26.132 "enable_ktls": false 00:18:26.132 } 00:18:26.132 } 00:18:26.132 ] 00:18:26.132 }, 00:18:26.132 { 00:18:26.132 "subsystem": "vmd", 00:18:26.132 "config": [] 00:18:26.132 }, 00:18:26.132 { 00:18:26.132 "subsystem": "accel", 00:18:26.132 "config": [ 00:18:26.132 { 00:18:26.132 "method": "accel_set_options", 00:18:26.132 "params": { 00:18:26.132 "small_cache_size": 128, 00:18:26.132 "large_cache_size": 16, 00:18:26.132 "task_count": 2048, 00:18:26.132 "sequence_count": 2048, 00:18:26.132 "buf_count": 
2048 00:18:26.132 } 00:18:26.132 } 00:18:26.132 ] 00:18:26.132 }, 00:18:26.132 { 00:18:26.132 "subsystem": "bdev", 00:18:26.132 "config": [ 00:18:26.132 { 00:18:26.132 "method": "bdev_set_options", 00:18:26.132 "params": { 00:18:26.132 "bdev_io_pool_size": 65535, 00:18:26.132 "bdev_io_cache_size": 256, 00:18:26.132 "bdev_auto_examine": true, 00:18:26.133 "iobuf_small_cache_size": 128, 00:18:26.133 "iobuf_large_cache_size": 16 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "bdev_raid_set_options", 00:18:26.133 "params": { 00:18:26.133 "process_window_size_kb": 1024, 00:18:26.133 "process_max_bandwidth_mb_sec": 0 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "bdev_iscsi_set_options", 00:18:26.133 "params": { 00:18:26.133 "timeout_sec": 30 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "bdev_nvme_set_options", 00:18:26.133 "params": { 00:18:26.133 "action_on_timeout": "none", 00:18:26.133 "timeout_us": 0, 00:18:26.133 "timeout_admin_us": 0, 00:18:26.133 "keep_alive_timeout_ms": 10000, 00:18:26.133 "arbitration_burst": 0, 00:18:26.133 "low_priority_weight": 0, 00:18:26.133 "medium_priority_weight": 0, 00:18:26.133 "high_priority_weight": 0, 00:18:26.133 "nvme_adminq_poll_period_us": 10000, 00:18:26.133 "nvme_ioq_poll_period_us": 0, 00:18:26.133 "io_queue_requests": 0, 00:18:26.133 "delay_cmd_submit": true, 00:18:26.133 "transport_retry_count": 4, 00:18:26.133 "bdev_retry_count": 3, 00:18:26.133 "transport_ack_timeout": 0, 00:18:26.133 "ctrlr_loss_timeout_sec": 0, 00:18:26.133 "reconnect_delay_sec": 0, 00:18:26.133 "fast_io_fail_timeout_sec": 0, 00:18:26.133 "disable_auto_failback": false, 00:18:26.133 "generate_uuids": false, 00:18:26.133 "transport_tos": 0, 00:18:26.133 "nvme_error_stat": false, 00:18:26.133 "rdma_srq_size": 0, 00:18:26.133 "io_path_stat": false, 00:18:26.133 "allow_accel_sequence": false, 00:18:26.133 "rdma_max_cq_size": 0, 00:18:26.133 "rdma_cm_event_timeout_ms": 0, 00:18:26.133 "dhchap_digests": [ 00:18:26.133 "sha256", 00:18:26.133 "sha384", 00:18:26.133 "sha512" 00:18:26.133 ], 00:18:26.133 "dhchap_dhgroups": [ 00:18:26.133 "null", 00:18:26.133 "ffdhe2048", 00:18:26.133 "ffdhe3072", 00:18:26.133 "ffdhe4096", 00:18:26.133 "ffdhe6144", 00:18:26.133 "ffdhe8192" 00:18:26.133 ] 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "bdev_nvme_set_hotplug", 00:18:26.133 "params": { 00:18:26.133 "period_us": 100000, 00:18:26.133 "enable": false 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "bdev_malloc_create", 00:18:26.133 "params": { 00:18:26.133 "name": "malloc0", 00:18:26.133 "num_blocks": 8192, 00:18:26.133 "block_size": 4096, 00:18:26.133 "physical_block_size": 4096, 00:18:26.133 "uuid": "7942c196-dcc6-491e-ab7d-580a7d043ffc", 00:18:26.133 "optimal_io_boundary": 0, 00:18:26.133 "md_size": 0, 00:18:26.133 "dif_type": 0, 00:18:26.133 "dif_is_head_of_md": false, 00:18:26.133 "dif_pi_format": 0 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "bdev_wait_for_examine" 00:18:26.133 } 00:18:26.133 ] 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "subsystem": "nbd", 00:18:26.133 "config": [] 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "subsystem": "scheduler", 00:18:26.133 "config": [ 00:18:26.133 { 00:18:26.133 "method": "framework_set_scheduler", 00:18:26.133 "params": { 00:18:26.133 "name": "static" 00:18:26.133 } 00:18:26.133 } 00:18:26.133 ] 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "subsystem": "nvmf", 00:18:26.133 "config": [ 00:18:26.133 { 00:18:26.133 
"method": "nvmf_set_config", 00:18:26.133 "params": { 00:18:26.133 "discovery_filter": "match_any", 00:18:26.133 "admin_cmd_passthru": { 00:18:26.133 "identify_ctrlr": false 00:18:26.133 } 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "nvmf_set_max_subsystems", 00:18:26.133 "params": { 00:18:26.133 "max_subsystems": 1024 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "nvmf_set_crdt", 00:18:26.133 "params": { 00:18:26.133 "crdt1": 0, 00:18:26.133 "crdt2": 0, 00:18:26.133 "crdt3": 0 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "nvmf_create_transport", 00:18:26.133 "params": { 00:18:26.133 "trtype": "TCP", 00:18:26.133 "max_queue_depth": 128, 00:18:26.133 "max_io_qpairs_per_ctrlr": 127, 00:18:26.133 "in_capsule_data_size": 4096, 00:18:26.133 "max_io_size": 131072, 00:18:26.133 "io_unit_size": 131072, 00:18:26.133 "max_aq_depth": 128, 00:18:26.133 "num_shared_buffers": 511, 00:18:26.133 "buf_cache_size": 4294967295, 00:18:26.133 "dif_insert_or_strip": false, 00:18:26.133 "zcopy": false, 00:18:26.133 "c2h_success": false, 00:18:26.133 "sock_priority": 0, 00:18:26.133 "abort_timeout_sec": 1, 00:18:26.133 "ack_timeout": 0, 00:18:26.133 "data_wr_pool_size": 0 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "nvmf_create_subsystem", 00:18:26.133 "params": { 00:18:26.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.133 "allow_any_host": false, 00:18:26.133 "serial_number": "00000000000000000000", 00:18:26.133 "model_number": "SPDK bdev Controller", 00:18:26.133 "max_namespaces": 32, 00:18:26.133 "min_cntlid": 1, 00:18:26.133 "max_cntlid": 65519, 00:18:26.133 "ana_reporting": false 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "nvmf_subsystem_add_host", 00:18:26.133 "params": { 00:18:26.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.133 "host": "nqn.2016-06.io.spdk:host1", 00:18:26.133 "psk": "key0" 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "nvmf_subsystem_add_ns", 00:18:26.133 "params": { 00:18:26.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.133 "namespace": { 00:18:26.133 "nsid": 1, 00:18:26.133 "bdev_name": "malloc0", 00:18:26.133 "nguid": "7942C196DCC6491EAB7D580A7D043FFC", 00:18:26.133 "uuid": "7942c196-dcc6-491e-ab7d-580a7d043ffc", 00:18:26.133 "no_auto_visible": false 00:18:26.133 } 00:18:26.133 } 00:18:26.133 }, 00:18:26.133 { 00:18:26.133 "method": "nvmf_subsystem_add_listener", 00:18:26.133 "params": { 00:18:26.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.133 "listen_address": { 00:18:26.133 "trtype": "TCP", 00:18:26.133 "adrfam": "IPv4", 00:18:26.133 "traddr": "10.0.0.2", 00:18:26.133 "trsvcid": "4420" 00:18:26.133 }, 00:18:26.133 "secure_channel": false, 00:18:26.133 "sock_impl": "ssl" 00:18:26.133 } 00:18:26.133 } 00:18:26.133 ] 00:18:26.133 } 00:18:26.133 ] 00:18:26.133 }' 00:18:26.133 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:26.700 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:26.700 "subsystems": [ 00:18:26.700 { 00:18:26.700 "subsystem": "keyring", 00:18:26.700 "config": [ 00:18:26.700 { 00:18:26.700 "method": "keyring_file_add_key", 00:18:26.700 "params": { 00:18:26.700 "name": "key0", 00:18:26.700 "path": "/tmp/tmp.hps9t6l2BG" 00:18:26.700 } 00:18:26.700 } 00:18:26.700 ] 00:18:26.700 }, 00:18:26.700 { 00:18:26.700 "subsystem": "iobuf", 00:18:26.700 
"config": [ 00:18:26.700 { 00:18:26.700 "method": "iobuf_set_options", 00:18:26.700 "params": { 00:18:26.700 "small_pool_count": 8192, 00:18:26.700 "large_pool_count": 1024, 00:18:26.700 "small_bufsize": 8192, 00:18:26.700 "large_bufsize": 135168 00:18:26.700 } 00:18:26.700 } 00:18:26.700 ] 00:18:26.700 }, 00:18:26.700 { 00:18:26.700 "subsystem": "sock", 00:18:26.701 "config": [ 00:18:26.701 { 00:18:26.701 "method": "sock_set_default_impl", 00:18:26.701 "params": { 00:18:26.701 "impl_name": "posix" 00:18:26.701 } 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "method": "sock_impl_set_options", 00:18:26.701 "params": { 00:18:26.701 "impl_name": "ssl", 00:18:26.701 "recv_buf_size": 4096, 00:18:26.701 "send_buf_size": 4096, 00:18:26.701 "enable_recv_pipe": true, 00:18:26.701 "enable_quickack": false, 00:18:26.701 "enable_placement_id": 0, 00:18:26.701 "enable_zerocopy_send_server": true, 00:18:26.701 "enable_zerocopy_send_client": false, 00:18:26.701 "zerocopy_threshold": 0, 00:18:26.701 "tls_version": 0, 00:18:26.701 "enable_ktls": false 00:18:26.701 } 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "method": "sock_impl_set_options", 00:18:26.701 "params": { 00:18:26.701 "impl_name": "posix", 00:18:26.701 "recv_buf_size": 2097152, 00:18:26.701 "send_buf_size": 2097152, 00:18:26.701 "enable_recv_pipe": true, 00:18:26.701 "enable_quickack": false, 00:18:26.701 "enable_placement_id": 0, 00:18:26.701 "enable_zerocopy_send_server": true, 00:18:26.701 "enable_zerocopy_send_client": false, 00:18:26.701 "zerocopy_threshold": 0, 00:18:26.701 "tls_version": 0, 00:18:26.701 "enable_ktls": false 00:18:26.701 } 00:18:26.701 } 00:18:26.701 ] 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "subsystem": "vmd", 00:18:26.701 "config": [] 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "subsystem": "accel", 00:18:26.701 "config": [ 00:18:26.701 { 00:18:26.701 "method": "accel_set_options", 00:18:26.701 "params": { 00:18:26.701 "small_cache_size": 128, 00:18:26.701 "large_cache_size": 16, 00:18:26.701 "task_count": 2048, 00:18:26.701 "sequence_count": 2048, 00:18:26.701 "buf_count": 2048 00:18:26.701 } 00:18:26.701 } 00:18:26.701 ] 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "subsystem": "bdev", 00:18:26.701 "config": [ 00:18:26.701 { 00:18:26.701 "method": "bdev_set_options", 00:18:26.701 "params": { 00:18:26.701 "bdev_io_pool_size": 65535, 00:18:26.701 "bdev_io_cache_size": 256, 00:18:26.701 "bdev_auto_examine": true, 00:18:26.701 "iobuf_small_cache_size": 128, 00:18:26.701 "iobuf_large_cache_size": 16 00:18:26.701 } 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "method": "bdev_raid_set_options", 00:18:26.701 "params": { 00:18:26.701 "process_window_size_kb": 1024, 00:18:26.701 "process_max_bandwidth_mb_sec": 0 00:18:26.701 } 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "method": "bdev_iscsi_set_options", 00:18:26.701 "params": { 00:18:26.701 "timeout_sec": 30 00:18:26.701 } 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "method": "bdev_nvme_set_options", 00:18:26.701 "params": { 00:18:26.701 "action_on_timeout": "none", 00:18:26.701 "timeout_us": 0, 00:18:26.701 "timeout_admin_us": 0, 00:18:26.701 "keep_alive_timeout_ms": 10000, 00:18:26.701 "arbitration_burst": 0, 00:18:26.701 "low_priority_weight": 0, 00:18:26.701 "medium_priority_weight": 0, 00:18:26.701 "high_priority_weight": 0, 00:18:26.701 "nvme_adminq_poll_period_us": 10000, 00:18:26.701 "nvme_ioq_poll_period_us": 0, 00:18:26.701 "io_queue_requests": 512, 00:18:26.701 "delay_cmd_submit": true, 00:18:26.701 "transport_retry_count": 4, 00:18:26.701 "bdev_retry_count": 3, 
00:18:26.701 "transport_ack_timeout": 0, 00:18:26.701 "ctrlr_loss_timeout_sec": 0, 00:18:26.701 "reconnect_delay_sec": 0, 00:18:26.701 "fast_io_fail_timeout_sec": 0, 00:18:26.701 "disable_auto_failback": false, 00:18:26.701 "generate_uuids": false, 00:18:26.701 "transport_tos": 0, 00:18:26.701 "nvme_error_stat": false, 00:18:26.701 "rdma_srq_size": 0, 00:18:26.701 "io_path_stat": false, 00:18:26.701 "allow_accel_sequence": false, 00:18:26.701 "rdma_max_cq_size": 0, 00:18:26.701 "rdma_cm_event_timeout_ms": 0, 00:18:26.701 "dhchap_digests": [ 00:18:26.701 "sha256", 00:18:26.701 "sha384", 00:18:26.701 "sha512" 00:18:26.701 ], 00:18:26.701 "dhchap_dhgroups": [ 00:18:26.701 "null", 00:18:26.701 "ffdhe2048", 00:18:26.701 "ffdhe3072", 00:18:26.701 "ffdhe4096", 00:18:26.701 "ffdhe6144", 00:18:26.701 "ffdhe8192" 00:18:26.701 ] 00:18:26.701 } 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "method": "bdev_nvme_attach_controller", 00:18:26.701 "params": { 00:18:26.701 "name": "nvme0", 00:18:26.701 "trtype": "TCP", 00:18:26.701 "adrfam": "IPv4", 00:18:26.701 "traddr": "10.0.0.2", 00:18:26.701 "trsvcid": "4420", 00:18:26.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.701 "prchk_reftag": false, 00:18:26.701 "prchk_guard": false, 00:18:26.701 "ctrlr_loss_timeout_sec": 0, 00:18:26.701 "reconnect_delay_sec": 0, 00:18:26.701 "fast_io_fail_timeout_sec": 0, 00:18:26.701 "psk": "key0", 00:18:26.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.701 "hdgst": false, 00:18:26.701 "ddgst": false 00:18:26.701 } 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "method": "bdev_nvme_set_hotplug", 00:18:26.701 "params": { 00:18:26.701 "period_us": 100000, 00:18:26.701 "enable": false 00:18:26.701 } 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "method": "bdev_enable_histogram", 00:18:26.701 "params": { 00:18:26.701 "name": "nvme0n1", 00:18:26.701 "enable": true 00:18:26.701 } 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "method": "bdev_wait_for_examine" 00:18:26.701 } 00:18:26.701 ] 00:18:26.701 }, 00:18:26.701 { 00:18:26.701 "subsystem": "nbd", 00:18:26.701 "config": [] 00:18:26.701 } 00:18:26.701 ] 00:18:26.701 }' 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1523991 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1523991 ']' 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1523991 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1523991 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1523991' 00:18:26.701 killing process with pid 1523991 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1523991 00:18:26.701 Received shutdown signal, test time was about 1.000000 seconds 00:18:26.701 00:18:26.701 Latency(us) 00:18:26.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.701 
=================================================================================================================== 00:18:26.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.701 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1523991 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1523958 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1523958 ']' 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1523958 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1523958 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1523958' 00:18:26.960 killing process with pid 1523958 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1523958 00:18:26.960 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1523958 00:18:27.220 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:27.220 "subsystems": [ 00:18:27.220 { 00:18:27.220 "subsystem": "keyring", 00:18:27.220 "config": [ 00:18:27.220 { 00:18:27.220 "method": "keyring_file_add_key", 00:18:27.220 "params": { 00:18:27.220 "name": "key0", 00:18:27.220 "path": "/tmp/tmp.hps9t6l2BG" 00:18:27.220 } 00:18:27.220 } 00:18:27.220 ] 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "subsystem": "iobuf", 00:18:27.220 "config": [ 00:18:27.220 { 00:18:27.220 "method": "iobuf_set_options", 00:18:27.220 "params": { 00:18:27.220 "small_pool_count": 8192, 00:18:27.220 "large_pool_count": 1024, 00:18:27.220 "small_bufsize": 8192, 00:18:27.220 "large_bufsize": 135168 00:18:27.220 } 00:18:27.220 } 00:18:27.220 ] 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "subsystem": "sock", 00:18:27.220 "config": [ 00:18:27.220 { 00:18:27.220 "method": "sock_set_default_impl", 00:18:27.220 "params": { 00:18:27.220 "impl_name": "posix" 00:18:27.220 } 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "method": "sock_impl_set_options", 00:18:27.220 "params": { 00:18:27.220 "impl_name": "ssl", 00:18:27.220 "recv_buf_size": 4096, 00:18:27.220 "send_buf_size": 4096, 00:18:27.220 "enable_recv_pipe": true, 00:18:27.220 "enable_quickack": false, 00:18:27.220 "enable_placement_id": 0, 00:18:27.220 "enable_zerocopy_send_server": true, 00:18:27.220 "enable_zerocopy_send_client": false, 00:18:27.220 "zerocopy_threshold": 0, 00:18:27.220 "tls_version": 0, 00:18:27.220 "enable_ktls": false 00:18:27.220 } 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "method": "sock_impl_set_options", 00:18:27.220 "params": { 00:18:27.220 "impl_name": "posix", 00:18:27.220 "recv_buf_size": 2097152, 00:18:27.220 "send_buf_size": 2097152, 00:18:27.220 "enable_recv_pipe": true, 00:18:27.220 "enable_quickack": false, 00:18:27.220 "enable_placement_id": 0, 00:18:27.220 "enable_zerocopy_send_server": true, 00:18:27.220 
"enable_zerocopy_send_client": false, 00:18:27.220 "zerocopy_threshold": 0, 00:18:27.220 "tls_version": 0, 00:18:27.220 "enable_ktls": false 00:18:27.220 } 00:18:27.220 } 00:18:27.220 ] 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "subsystem": "vmd", 00:18:27.220 "config": [] 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "subsystem": "accel", 00:18:27.220 "config": [ 00:18:27.220 { 00:18:27.220 "method": "accel_set_options", 00:18:27.220 "params": { 00:18:27.220 "small_cache_size": 128, 00:18:27.220 "large_cache_size": 16, 00:18:27.220 "task_count": 2048, 00:18:27.220 "sequence_count": 2048, 00:18:27.220 "buf_count": 2048 00:18:27.220 } 00:18:27.220 } 00:18:27.220 ] 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "subsystem": "bdev", 00:18:27.220 "config": [ 00:18:27.220 { 00:18:27.220 "method": "bdev_set_options", 00:18:27.220 "params": { 00:18:27.220 "bdev_io_pool_size": 65535, 00:18:27.220 "bdev_io_cache_size": 256, 00:18:27.220 "bdev_auto_examine": true, 00:18:27.220 "iobuf_small_cache_size": 128, 00:18:27.220 "iobuf_large_cache_size": 16 00:18:27.220 } 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "method": "bdev_raid_set_options", 00:18:27.220 "params": { 00:18:27.220 "process_window_size_kb": 1024, 00:18:27.220 "process_max_bandwidth_mb_sec": 0 00:18:27.220 } 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "method": "bdev_iscsi_set_options", 00:18:27.220 "params": { 00:18:27.220 "timeout_sec": 30 00:18:27.220 } 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "method": "bdev_nvme_set_options", 00:18:27.220 "params": { 00:18:27.220 "action_on_timeout": "none", 00:18:27.220 "timeout_us": 0, 00:18:27.220 "timeout_admin_us": 0, 00:18:27.220 "keep_alive_timeout_ms": 10000, 00:18:27.220 "arbitration_burst": 0, 00:18:27.220 "low_priority_weight": 0, 00:18:27.220 "medium_priority_weight": 0, 00:18:27.220 "high_priority_weight": 0, 00:18:27.220 "nvme_adminq_poll_period_us": 10000, 00:18:27.220 "nvme_ioq_poll_period_us": 0, 00:18:27.220 "io_queue_requests": 0, 00:18:27.220 "delay_cmd_submit": true, 00:18:27.220 "transport_retry_count": 4, 00:18:27.220 "bdev_retry_count": 3, 00:18:27.220 "transport_ack_timeout": 0, 00:18:27.220 "ctrlr_loss_timeout_sec": 0, 00:18:27.220 "reconnect_delay_sec": 0, 00:18:27.220 "fast_io_fail_timeout_sec": 0, 00:18:27.220 "disable_auto_failback": false, 00:18:27.220 "generate_uuids": false, 00:18:27.220 "transport_tos": 0, 00:18:27.220 "nvme_error_stat": false, 00:18:27.220 "rdma_srq_size": 0, 00:18:27.220 "io_path_stat": false, 00:18:27.220 "allow_accel_sequence": false, 00:18:27.220 "rdma_max_cq_size": 0, 00:18:27.220 "rdma_cm_event_timeout_ms": 0, 00:18:27.220 "dhchap_digests": [ 00:18:27.220 "sha256", 00:18:27.220 "sha384", 00:18:27.220 "sha512" 00:18:27.220 ], 00:18:27.220 "dhchap_dhgroups": [ 00:18:27.220 "null", 00:18:27.220 "ffdhe2048", 00:18:27.220 "ffdhe3072", 00:18:27.220 "ffdhe4096", 00:18:27.220 "ffdhe6144", 00:18:27.220 "ffdhe8192" 00:18:27.220 ] 00:18:27.220 } 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "method": "bdev_nvme_set_hotplug", 00:18:27.220 "params": { 00:18:27.220 "period_us": 100000, 00:18:27.220 "enable": false 00:18:27.220 } 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "method": "bdev_malloc_create", 00:18:27.220 "params": { 00:18:27.220 "name": "malloc0", 00:18:27.220 "num_blocks": 8192, 00:18:27.220 "block_size": 4096, 00:18:27.220 "physical_block_size": 4096, 00:18:27.220 "uuid": "7942c196-dcc6-491e-ab7d-580a7d043ffc", 00:18:27.220 "optimal_io_boundary": 0, 00:18:27.220 "md_size": 0, 00:18:27.220 "dif_type": 0, 00:18:27.220 "dif_is_head_of_md": false, 
00:18:27.220 "dif_pi_format": 0 00:18:27.220 } 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "method": "bdev_wait_for_examine" 00:18:27.220 } 00:18:27.220 ] 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "subsystem": "nbd", 00:18:27.220 "config": [] 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "subsystem": "scheduler", 00:18:27.220 "config": [ 00:18:27.220 { 00:18:27.220 "method": "framework_set_scheduler", 00:18:27.220 "params": { 00:18:27.220 "name": "static" 00:18:27.220 } 00:18:27.220 } 00:18:27.220 ] 00:18:27.220 }, 00:18:27.220 { 00:18:27.220 "subsystem": "nvmf", 00:18:27.220 "config": [ 00:18:27.220 { 00:18:27.220 "method": "nvmf_set_config", 00:18:27.220 "params": { 00:18:27.220 "discovery_filter": "match_any", 00:18:27.221 "admin_cmd_passthru": { 00:18:27.221 "identify_ctrlr": false 00:18:27.221 } 00:18:27.221 } 00:18:27.221 }, 00:18:27.221 { 00:18:27.221 "method": "nvmf_set_max_subsystems", 00:18:27.221 "params": { 00:18:27.221 "max_subsystems": 1024 00:18:27.221 } 00:18:27.221 }, 00:18:27.221 { 00:18:27.221 "method": "nvmf_set_crdt", 00:18:27.221 "params": { 00:18:27.221 "crdt1": 0, 00:18:27.221 "crdt2": 0, 00:18:27.221 "crdt3": 0 00:18:27.221 } 00:18:27.221 }, 00:18:27.221 { 00:18:27.221 "method": "nvmf_create_transport", 00:18:27.221 "params": { 00:18:27.221 "trtype": "TCP", 00:18:27.221 "max_queue_depth": 128, 00:18:27.221 "max_io_qpairs_per_ctrlr": 127, 00:18:27.221 "in_capsule_data_size": 4096, 00:18:27.221 "max_io_size": 131072, 00:18:27.221 "io_unit_size": 131072, 00:18:27.221 "max_aq_depth": 128, 00:18:27.221 "num_shared_buffers": 511, 00:18:27.221 "buf_cache_size": 4294967295, 00:18:27.221 "dif_insert_or_strip": false, 00:18:27.221 "zcopy": false, 00:18:27.221 "c2h_success": false, 00:18:27.221 "sock_priority": 0, 00:18:27.221 "abort_timeout_sec": 1, 00:18:27.221 "ack_timeout": 0, 00:18:27.221 "data_wr_pool_size": 0 00:18:27.221 } 00:18:27.221 }, 00:18:27.221 { 00:18:27.221 "method": "nvmf_create_subsystem", 00:18:27.221 "params": { 00:18:27.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.221 "allow_any_host": false, 00:18:27.221 "serial_number": "00000000000000000000", 00:18:27.221 "model_number": "SPDK bdev Controller", 00:18:27.221 "max_namespaces": 32, 00:18:27.221 "min_cntlid": 1, 00:18:27.221 "max_cntlid": 65519, 00:18:27.221 "ana_reporting": false 00:18:27.221 } 00:18:27.221 }, 00:18:27.221 { 00:18:27.221 "method": "nvmf_subsystem_add_host", 00:18:27.221 "params": { 00:18:27.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.221 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.221 "psk": "key0" 00:18:27.221 } 00:18:27.221 }, 00:18:27.221 { 00:18:27.221 "method": "nvmf_subsystem_add_ns", 00:18:27.221 "params": { 00:18:27.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.221 "namespace": { 00:18:27.221 "nsid": 1, 00:18:27.221 "bdev_name": "malloc0", 00:18:27.221 "nguid": "7942C196DCC6491EAB7D580A7D043FFC", 00:18:27.221 "uuid": "7942c196-dcc6-491e-ab7d-580a7d043ffc", 00:18:27.221 "no_auto_visible": false 00:18:27.221 } 00:18:27.221 } 00:18:27.221 }, 00:18:27.221 { 00:18:27.221 "method": "nvmf_subsystem_add_listener", 00:18:27.221 "params": { 00:18:27.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.221 "listen_address": { 00:18:27.221 "trtype": "TCP", 00:18:27.221 "adrfam": "IPv4", 00:18:27.221 "traddr": "10.0.0.2", 00:18:27.221 "trsvcid": "4420" 00:18:27.221 }, 00:18:27.221 "secure_channel": false, 00:18:27.221 "sock_impl": "ssl" 00:18:27.221 } 00:18:27.221 } 00:18:27.221 ] 00:18:27.221 } 00:18:27.221 ] 00:18:27.221 }' 00:18:27.221 10:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1524300 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1524300 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1524300 ']' 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.221 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.221 [2024-07-25 10:25:16.797848] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:18:27.221 [2024-07-25 10:25:16.797943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.221 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.221 [2024-07-25 10:25:16.864101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.221 [2024-07-25 10:25:16.982427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.221 [2024-07-25 10:25:16.982504] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.221 [2024-07-25 10:25:16.982525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.221 [2024-07-25 10:25:16.982540] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.221 [2024-07-25 10:25:16.982552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
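Stripped of defaults, the target-side config echoed above does five TLS-relevant things: it loads the PSK file /tmp/tmp.hps9t6l2BG into the keyring as key0, creates the TCP transport, creates nqn.2016-06.io.spdk:cnode1 backed by malloc0, authorizes nqn.2016-06.io.spdk:host1 against that PSK, and opens the 10.0.0.2:4420 listener with the ssl socket implementation. The same setup can be sketched as runtime RPCs instead of a startup config; the method names and values are taken from the JSON above, but the exact rpc.py flag spellings (--psk, --sock-impl) and the MB unit of bdev_malloc_create are assumptions, not verified against this tree:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc keyring_file_add_key key0 /tmp/tmp.hps9t6l2BG            # same PSK file as in the config
  $rpc nvmf_create_transport -t TCP                             # remaining params left at defaults
  $rpc bdev_malloc_create -b malloc0 32 4096                    # 8192 blocks x 4096 B = 32 MiB, as above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000 -m 32
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --sock-impl ssl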
00:18:27.221 [2024-07-25 10:25:16.982644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.480 [2024-07-25 10:25:17.213182] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.480 [2024-07-25 10:25:17.255262] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.480 [2024-07-25 10:25:17.255527] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1524419 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1524419 /var/tmp/bdevperf.sock 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1524419 ']' 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
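Note the -c /dev/fd/63 in the bdevperf command line: like the nvmf_tgt started a moment ago with -c /dev/fd/62, the process reads its JSON config from a file descriptor fed by the echo '{ ... }' that follows, so nothing touches disk. The shape of that pattern, as a minimal bash sketch (cfg is an illustrative variable name):

  cfg='{ "subsystems": [ ... ] }'     # the full JSON echoed below
  bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$cfg")

Bash exposes the process substitution as /dev/fd/N (63 here), which is why that literal path shows up in the logged command. The -z flag keeps bdevperf idle until perform_tests arrives over the RPC socket, which happens further down via bdevperf.py.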
00:18:28.415 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:28.415 "subsystems": [ 00:18:28.415 { 00:18:28.415 "subsystem": "keyring", 00:18:28.415 "config": [ 00:18:28.415 { 00:18:28.415 "method": "keyring_file_add_key", 00:18:28.415 "params": { 00:18:28.415 "name": "key0", 00:18:28.415 "path": "/tmp/tmp.hps9t6l2BG" 00:18:28.415 } 00:18:28.415 } 00:18:28.415 ] 00:18:28.415 }, 00:18:28.415 { 00:18:28.415 "subsystem": "iobuf", 00:18:28.415 "config": [ 00:18:28.415 { 00:18:28.415 "method": "iobuf_set_options", 00:18:28.415 "params": { 00:18:28.415 "small_pool_count": 8192, 00:18:28.415 "large_pool_count": 1024, 00:18:28.415 "small_bufsize": 8192, 00:18:28.415 "large_bufsize": 135168 00:18:28.415 } 00:18:28.415 } 00:18:28.415 ] 00:18:28.415 }, 00:18:28.415 { 00:18:28.415 "subsystem": "sock", 00:18:28.415 "config": [ 00:18:28.415 { 00:18:28.415 "method": "sock_set_default_impl", 00:18:28.415 "params": { 00:18:28.415 "impl_name": "posix" 00:18:28.415 } 00:18:28.415 }, 00:18:28.415 { 00:18:28.415 "method": "sock_impl_set_options", 00:18:28.415 "params": { 00:18:28.415 "impl_name": "ssl", 00:18:28.415 "recv_buf_size": 4096, 00:18:28.415 "send_buf_size": 4096, 00:18:28.415 "enable_recv_pipe": true, 00:18:28.415 "enable_quickack": false, 00:18:28.415 "enable_placement_id": 0, 00:18:28.415 "enable_zerocopy_send_server": true, 00:18:28.415 "enable_zerocopy_send_client": false, 00:18:28.415 "zerocopy_threshold": 0, 00:18:28.415 "tls_version": 0, 00:18:28.415 "enable_ktls": false 00:18:28.415 } 00:18:28.415 }, 00:18:28.415 { 00:18:28.415 "method": "sock_impl_set_options", 00:18:28.415 "params": { 00:18:28.415 "impl_name": "posix", 00:18:28.415 "recv_buf_size": 2097152, 00:18:28.415 "send_buf_size": 2097152, 00:18:28.415 "enable_recv_pipe": true, 00:18:28.415 "enable_quickack": false, 00:18:28.415 "enable_placement_id": 0, 00:18:28.415 "enable_zerocopy_send_server": true, 00:18:28.415 "enable_zerocopy_send_client": false, 00:18:28.415 "zerocopy_threshold": 0, 00:18:28.415 "tls_version": 0, 00:18:28.415 "enable_ktls": false 00:18:28.415 } 00:18:28.415 } 00:18:28.415 ] 00:18:28.415 }, 00:18:28.415 { 00:18:28.415 "subsystem": "vmd", 00:18:28.415 "config": [] 00:18:28.415 }, 00:18:28.415 { 00:18:28.415 "subsystem": "accel", 00:18:28.415 "config": [ 00:18:28.415 { 00:18:28.415 "method": "accel_set_options", 00:18:28.415 "params": { 00:18:28.415 "small_cache_size": 128, 00:18:28.415 "large_cache_size": 16, 00:18:28.415 "task_count": 2048, 00:18:28.415 "sequence_count": 2048, 00:18:28.415 "buf_count": 2048 00:18:28.415 } 00:18:28.415 } 00:18:28.415 ] 00:18:28.415 }, 00:18:28.415 { 00:18:28.415 "subsystem": "bdev", 00:18:28.415 "config": [ 00:18:28.415 { 00:18:28.415 "method": "bdev_set_options", 00:18:28.415 "params": { 00:18:28.415 "bdev_io_pool_size": 65535, 00:18:28.415 "bdev_io_cache_size": 256, 00:18:28.415 "bdev_auto_examine": true, 00:18:28.415 "iobuf_small_cache_size": 128, 00:18:28.415 "iobuf_large_cache_size": 16 00:18:28.415 } 00:18:28.415 }, 00:18:28.415 { 00:18:28.415 "method": "bdev_raid_set_options", 00:18:28.415 "params": { 00:18:28.415 "process_window_size_kb": 1024, 00:18:28.416 "process_max_bandwidth_mb_sec": 0 00:18:28.416 } 00:18:28.416 }, 00:18:28.416 { 00:18:28.416 "method": "bdev_iscsi_set_options", 00:18:28.416 "params": { 00:18:28.416 "timeout_sec": 30 00:18:28.416 } 00:18:28.416 }, 00:18:28.416 { 00:18:28.416 "method": "bdev_nvme_set_options", 00:18:28.416 "params": { 00:18:28.416 "action_on_timeout": "none", 00:18:28.416 "timeout_us": 0, 
00:18:28.416 "timeout_admin_us": 0, 00:18:28.416 "keep_alive_timeout_ms": 10000, 00:18:28.416 "arbitration_burst": 0, 00:18:28.416 "low_priority_weight": 0, 00:18:28.416 "medium_priority_weight": 0, 00:18:28.416 "high_priority_weight": 0, 00:18:28.416 "nvme_adminq_poll_period_us": 10000, 00:18:28.416 "nvme_ioq_poll_period_us": 0, 00:18:28.416 "io_queue_requests": 512, 00:18:28.416 "delay_cmd_submit": true, 00:18:28.416 "transport_retry_count": 4, 00:18:28.416 "bdev_retry_count": 3, 00:18:28.416 "transport_ack_timeout": 0, 00:18:28.416 "ctrlr_loss_timeout_sec": 0, 00:18:28.416 "reconnect_delay_sec": 0, 00:18:28.416 "fast_io_fail_timeout_sec": 0, 00:18:28.416 "disable_auto_failback": false, 00:18:28.416 "generate_uuids": false, 00:18:28.416 "transport_tos": 0, 00:18:28.416 "nvme_error_stat": false, 00:18:28.416 "rdma_srq_size": 0, 00:18:28.416 "io_path_stat": false, 00:18:28.416 "allow_accel_sequence": false, 00:18:28.416 "rdma_max_cq_size": 0, 00:18:28.416 "rdma_cm_event_timeout_ms": 0, 00:18:28.416 "dhchap_digests": [ 00:18:28.416 "sha256", 00:18:28.416 "sha384", 00:18:28.416 "sha512" 00:18:28.416 ], 00:18:28.416 "dhchap_dhgroups": [ 00:18:28.416 "null", 00:18:28.416 "ffdhe2048", 00:18:28.416 "ffdhe3072", 00:18:28.416 "ffdhe4096", 00:18:28.416 "ffdhe6144", 00:18:28.416 "ffdhe8192" 00:18:28.416 ] 00:18:28.416 } 00:18:28.416 }, 00:18:28.416 { 00:18:28.416 "method": "bdev_nvme_attach_controller", 00:18:28.416 "params": { 00:18:28.416 "name": "nvme0", 00:18:28.416 "trtype": "TCP", 00:18:28.416 "adrfam": "IPv4", 00:18:28.416 "traddr": "10.0.0.2", 00:18:28.416 "trsvcid": "4420", 00:18:28.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.416 "prchk_reftag": false, 00:18:28.416 "prchk_guard": false, 00:18:28.416 "ctrlr_loss_timeout_sec": 0, 00:18:28.416 "reconnect_delay_sec": 0, 00:18:28.416 "fast_io_fail_timeout_sec": 0, 00:18:28.416 "psk": "key0", 00:18:28.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.416 "hdgst": false, 00:18:28.416 "ddgst": false 00:18:28.416 } 00:18:28.416 }, 00:18:28.416 { 00:18:28.416 "method": "bdev_nvme_set_hotplug", 00:18:28.416 "params": { 00:18:28.416 "period_us": 100000, 00:18:28.416 "enable": false 00:18:28.416 } 00:18:28.416 }, 00:18:28.416 { 00:18:28.416 "method": "bdev_enable_histogram", 00:18:28.416 "params": { 00:18:28.416 "name": "nvme0n1", 00:18:28.416 "enable": true 00:18:28.416 } 00:18:28.416 }, 00:18:28.416 { 00:18:28.416 "method": "bdev_wait_for_examine" 00:18:28.416 } 00:18:28.416 ] 00:18:28.416 }, 00:18:28.416 { 00:18:28.416 "subsystem": "nbd", 00:18:28.416 "config": [] 00:18:28.416 } 00:18:28.416 ] 00:18:28.416 }' 00:18:28.416 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.416 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.416 [2024-07-25 10:25:17.911571] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:18:28.416 [2024-07-25 10:25:17.911672] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524419 ] 00:18:28.416 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.416 [2024-07-25 10:25:17.973008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.416 [2024-07-25 10:25:18.089912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.675 [2024-07-25 10:25:18.256180] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.240 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:29.240 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:29.240 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:29.240 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:29.498 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.498 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.756 Running I/O for 1 seconds... 00:18:30.690 00:18:30.690 Latency(us) 00:18:30.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.690 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.690 Verification LBA range: start 0x0 length 0x2000 00:18:30.690 nvme0n1 : 1.02 3191.95 12.47 0.00 0.00 39659.13 7184.69 37671.06 00:18:30.690 =================================================================================================================== 00:18:30.690 Total : 3191.95 12.47 0.00 0.00 39659.13 7184.69 37671.06 00:18:30.690 0 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:30.691 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:30.691 nvmf_trace.0 00:18:30.949 10:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1524419 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1524419 ']' 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1524419 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1524419 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1524419' 00:18:30.949 killing process with pid 1524419 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1524419 00:18:30.949 Received shutdown signal, test time was about 1.000000 seconds 00:18:30.949 00:18:30.949 Latency(us) 00:18:30.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.949 =================================================================================================================== 00:18:30.949 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1524419 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:30.949 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.208 rmmod nvme_tcp 00:18:31.208 rmmod nvme_fabrics 00:18:31.208 rmmod nvme_keyring 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1524300 ']' 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1524300 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1524300 ']' 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1524300 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.208 10:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1524300 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1524300' 00:18:31.208 killing process with pid 1524300 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1524300 00:18:31.208 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1524300 00:18:31.468 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:31.468 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.468 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.468 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.468 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.468 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.468 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.468 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.373 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:33.373 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.a6gS4wWXf5 /tmp/tmp.NOf2I28TKq /tmp/tmp.hps9t6l2BG 00:18:33.373 00:18:33.373 real 1m19.186s 00:18:33.373 user 2m11.378s 00:18:33.373 sys 0m24.246s 00:18:33.373 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:33.373 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.373 ************************************ 00:18:33.373 END TEST nvmf_tls 00:18:33.373 ************************************ 00:18:33.373 10:25:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:33.373 10:25:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:33.373 10:25:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:33.373 10:25:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:33.373 ************************************ 00:18:33.373 START TEST nvmf_fips 00:18:33.373 ************************************ 00:18:33.373 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:33.632 * Looking for test storage... 
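The killprocess calls that just tore down pids 1524419 and 1524300 (and 1523991/1523958 before them) all trace the same helper from autotest_common.sh: confirm the pid is alive with kill -0, read its comm name (reactor_0/reactor_1 in these runs) to special-case sudo, then kill and wait so the exit status is collected. A condensed paraphrase of that flow, not a verbatim copy of the helper:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                    # is the process still running?
      uname -s | grep -q Linux || return 1          # this is the Linux branch seen in the trace
      local name; name=$(ps --no-headers -o comm= "$pid")
      # $name is reactor_0/reactor_1 here; the sudo case is handled separately upstream
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                    # reap, so a bad exit status surfaces in the test
  }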
00:18:33.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:33.632 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:33.633 Error setting digest 00:18:33.633 00720FF3067F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:33.633 00720FF3067F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:33.633 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:35.536 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 
00:18:35.536 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:35.536 Found net devices under 0000:08:00.0: cvl_0_0 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:35.536 Found net devices under 0000:08:00.1: cvl_0_1 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:35.536 
10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.536 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.536 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.536 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.536 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:35.536 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:35.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:18:35.537 00:18:35.537 --- 10.0.0.2 ping statistics --- 00:18:35.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.537 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:35.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:18:35.537 00:18:35.537 --- 10.0.0.1 ping statistics --- 00:18:35.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.537 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1526239 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1526239 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1526239 ']' 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.537 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:35.537 [2024-07-25 10:25:25.199122] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
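Condensed from the nvmf_tcp_init trace above into plain shell, the network setup moves one E810 port into a private namespace and addresses the two ports back to back. Every command below is lifted directly from the trace (only the xtrace prefixes are stripped); nothing here is invented:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side, will hold 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

The two successful pings (0.252 ms and 0.163 ms above) gate the `return 0` from this init path, and every later `$NVMF_TARGET_NS_CMD` expansion is just `ip netns exec cvl_0_0_ns_spdk`, which is why the target application is launched under that prefix.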
00:18:35.537 [2024-07-25 10:25:25.199224] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.537 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.537 [2024-07-25 10:25:25.267066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.795 [2024-07-25 10:25:25.382437] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.795 [2024-07-25 10:25:25.382514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.795 [2024-07-25 10:25:25.382531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.795 [2024-07-25 10:25:25.382545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.795 [2024-07-25 10:25:25.382556] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.795 [2024-07-25 10:25:25.382596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:35.795 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:36.053 [2024-07-25 10:25:25.781102] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.053 [2024-07-25 10:25:25.797087] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.053 [2024-07-25 10:25:25.797309] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.053 
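Between nvmfappstart and the "Listening on 10.0.0.2 port 4420" notice above, fips.sh writes the TLS PSK to key.txt with mode 0600 and hands it to setup_nvmf_tgt_conf over the /var/tmp/spdk.sock RPC socket. A minimal sketch of that configuration follows; paths are shortened, the malloc bdev size and block size are illustrative (they are not visible in the trace), other flags may differ slightly by SPDK version, and the --psk path form is exactly the deprecated feature the warning below complains about:

    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt
    rpc.py nvmf_create_transport -t tcp -o                 # NVMF_TRANSPORT_OPTS from the trace
    rpc.py bdev_malloc_create -b malloc0 32 512            # illustrative size_mb / block_size
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt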
[2024-07-25 10:25:25.826476] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:36.310 malloc0 00:18:36.310 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.310 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1526280 00:18:36.311 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.311 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1526280 /var/tmp/bdevperf.sock 00:18:36.311 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1526280 ']' 00:18:36.311 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.311 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.311 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.311 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.311 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:36.311 [2024-07-25 10:25:25.929196] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:18:36.311 [2024-07-25 10:25:25.929287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526280 ] 00:18:36.311 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.311 [2024-07-25 10:25:25.986096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.567 [2024-07-25 10:25:26.091542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.131 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.131 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:37.131 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:37.389 [2024-07-25 10:25:27.157566] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.389 [2024-07-25 10:25:27.157681] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:37.651 TLSTESTn1 00:18:37.651 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:37.651 Running I/O for 10 seconds... 
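The initiator side of the TLS handshake, condensed from the trace above (paths shortened): bdevperf is launched on its own core mask with its own RPC socket, the controller is attached with the same PSK file, and the queued verify workload is kicked off:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -z flag keeps bdevperf paused waiting for RPC instead of reading a config file, which is why both the attach and the perform_tests call arrive over /var/tmp/bdevperf.sock; the waitforlisten helper simply polls that socket until the process is ready.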
00:18:47.670 00:18:47.670 Latency(us) 00:18:47.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.670 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:47.670 Verification LBA range: start 0x0 length 0x2000 00:18:47.670 TLSTESTn1 : 10.03 3259.84 12.73 0.00 0.00 39192.33 6359.42 55147.33 00:18:47.670 =================================================================================================================== 00:18:47.670 Total : 3259.84 12.73 0.00 0.00 39192.33 6359.42 55147.33 00:18:47.670 0 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:47.930 nvmf_trace.0 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1526280 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1526280 ']' 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1526280 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526280 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526280' 00:18:47.930 killing process with pid 1526280 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1526280 00:18:47.930 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.930 00:18:47.930 Latency(us) 00:18:47.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.930 =================================================================================================================== 00:18:47.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.930 
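A quick cross-check of the result table: at the 4096-byte I/O size used above, 3259.84 IOPS x 4096 B is about 13.35 MB/s, and 3259.84 x 4096 / 2^20 = 12.73 MiB/s, matching the MiB/s column. The second, all-zero latency table that follows belongs to the shutdown path: it is printed after killprocess signals the bdevperf pid once the 10-second run completes.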
[2024-07-25 10:25:37.541631] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:47.930 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1526280 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.189 rmmod nvme_tcp 00:18:48.189 rmmod nvme_fabrics 00:18:48.189 rmmod nvme_keyring 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1526239 ']' 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1526239 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1526239 ']' 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1526239 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526239 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526239' 00:18:48.189 killing process with pid 1526239 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1526239 00:18:48.189 [2024-07-25 10:25:37.871704] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:48.189 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1526239 00:18:48.449 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:48.449 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:48.449 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:48.449 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.449 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.449 10:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.449 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.449 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.355 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.356 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:50.356 00:18:50.356 real 0m16.980s 00:18:50.356 user 0m22.467s 00:18:50.356 sys 0m5.755s 00:18:50.356 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:50.356 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.356 ************************************ 00:18:50.356 END TEST nvmf_fips 00:18:50.356 ************************************ 00:18:50.614 10:25:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:18:50.614 10:25:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:18:50.614 10:25:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:18:50.614 10:25:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:18:50.614 10:25:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:18:50.614 10:25:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.993 
10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:51.993 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.993 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:51.994 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:51.994 Found net devices under 0000:08:00.0: cvl_0_0 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:51.994 Found net devices under 0000:08:00.1: cvl_0_1 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.994 10:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.252 ************************************ 00:18:52.252 START TEST nvmf_perf_adq 00:18:52.252 ************************************ 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:52.252 * Looking for test storage... 
00:18:52.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.252 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.253 10:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.253 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:53.630 10:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:53.630 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:53.630 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.630 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:53.889 Found net devices under 0000:08:00.0: cvl_0_0 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
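The device-discovery block that repeats throughout this log (it runs once per sourcing of common.sh, and its trace for the second port continues just below) reduces to: collect the PCI functions whose device ID matches a supported NIC, then map each function to its kernel interface through sysfs, keeping only links that are up. A minimal sketch of that logic, assuming operstate is how the up check is made (the trace only ever shows the already-expanded `[[ up == up ]]`):

    pci_devs=(0000:08:00.0 0000:08:00.1)        # the two E810 ports (0x8086:0x159b) found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || continue           # no netdev bound to this function
            dev=${path##*/}
            [[ $(cat "/sys/class/net/$dev/operstate") == up ]] && net_devs+=("$dev")
        done
    done
    echo "${net_devs[@]}"                        # -> cvl_0_0 cvl_0_1 in this run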
00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:53.889 Found net devices under 0000:08:00.1: cvl_0_1 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:53.889 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:54.456 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:56.358 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
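Note the jump in the elapsed-time column from 00:18:54 to 00:19:01 above: that is adq_reload_driver at work, presumably so the ADQ configuration starts from a clean driver state. Reconstructed from the trace, it is simply:

    rmmod ice         # unload the E810 driver, dropping any stale queue/filter state
    modprobe ice      # reload; the ports come back as cvl_0_0 / cvl_0_1
    sleep 5           # give the links time to settle before nvmftestinit reruns discovery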
00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.637 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:01.638 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:01.638 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:01.638 Found net devices under 0000:08:00.0: cvl_0_0 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.638 10:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:01.638 Found net devices under 0000:08:00.1: cvl_0_1 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:01.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:19:01.638 00:19:01.638 --- 10.0.0.2 ping statistics --- 00:19:01.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.638 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:19:01.638 00:19:01.638 --- 10.0.0.1 ping statistics --- 00:19:01.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.638 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:01.638 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:01.639 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:01.639 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.639 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1530762 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1530762 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1530762 ']' 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:01.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.639 [2024-07-25 10:25:51.054833] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:19:01.639 [2024-07-25 10:25:51.054933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.639 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.639 [2024-07-25 10:25:51.125667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.639 [2024-07-25 10:25:51.246968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.639 [2024-07-25 10:25:51.247031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.639 [2024-07-25 10:25:51.247047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.639 [2024-07-25 10:25:51.247060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.639 [2024-07-25 10:25:51.247072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.639 [2024-07-25 10:25:51.247152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.639 [2024-07-25 10:25:51.247228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.639 [2024-07-25 10:25:51.247177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.639 [2024-07-25 10:25:51.247231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
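The adq_configure_nvmf_target helper traced here drives the freshly started nvmf_tgt through a fixed sequence of JSON-RPCs. As a sketch, the rpc_cmd calls visible in the trace below correspond roughly to the following scripts/rpc.py invocations (rpc_cmd is the test suite's wrapper around that script; the argument values are the ones shown in this run, and only the placement-id and sock-priority differ between the two passes of this test):

  # tune the detected default socket impl (posix above);
  # placement-id is 0 for this pass and 1 for the busy-poll pass later in the log
  rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  # a 64 MiB RAM disk with 512-byte blocks as the namespace backing device
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420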
00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.639 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.899 [2024-07-25 10:25:51.499596] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.899 Malloc1 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.899 [2024-07-25 10:25:51.548630] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1530886 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:01.899 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:01.899 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.800 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:03.800 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.800 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:04.059 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.059 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:04.059 "tick_rate": 2700000000, 00:19:04.059 "poll_groups": [ 00:19:04.059 { 00:19:04.059 "name": "nvmf_tgt_poll_group_000", 00:19:04.059 "admin_qpairs": 1, 00:19:04.059 "io_qpairs": 1, 00:19:04.059 "current_admin_qpairs": 1, 00:19:04.059 "current_io_qpairs": 1, 00:19:04.059 "pending_bdev_io": 0, 00:19:04.059 "completed_nvme_io": 17564, 00:19:04.059 "transports": [ 00:19:04.059 { 00:19:04.059 "trtype": "TCP" 00:19:04.059 } 00:19:04.059 ] 00:19:04.059 }, 00:19:04.059 { 00:19:04.059 "name": "nvmf_tgt_poll_group_001", 00:19:04.059 "admin_qpairs": 0, 00:19:04.059 "io_qpairs": 1, 00:19:04.059 "current_admin_qpairs": 0, 00:19:04.059 "current_io_qpairs": 1, 00:19:04.059 "pending_bdev_io": 0, 00:19:04.059 "completed_nvme_io": 18130, 00:19:04.059 "transports": [ 00:19:04.059 { 00:19:04.059 "trtype": "TCP" 00:19:04.059 } 00:19:04.059 ] 00:19:04.059 }, 00:19:04.059 { 00:19:04.059 "name": "nvmf_tgt_poll_group_002", 00:19:04.059 "admin_qpairs": 0, 00:19:04.059 "io_qpairs": 1, 00:19:04.059 "current_admin_qpairs": 0, 00:19:04.059 "current_io_qpairs": 1, 00:19:04.059 "pending_bdev_io": 0, 00:19:04.059 "completed_nvme_io": 19845, 00:19:04.059 "transports": [ 00:19:04.059 { 00:19:04.059 "trtype": "TCP" 00:19:04.059 } 00:19:04.059 ] 00:19:04.059 }, 00:19:04.059 { 00:19:04.059 "name": "nvmf_tgt_poll_group_003", 00:19:04.059 "admin_qpairs": 0, 00:19:04.059 "io_qpairs": 1, 00:19:04.059 "current_admin_qpairs": 0, 00:19:04.059 "current_io_qpairs": 1, 00:19:04.059 "pending_bdev_io": 0, 00:19:04.059 "completed_nvme_io": 17925, 00:19:04.059 "transports": [ 00:19:04.059 { 00:19:04.059 "trtype": "TCP" 00:19:04.059 } 00:19:04.059 ] 00:19:04.059 } 00:19:04.059 ] 00:19:04.059 }' 00:19:04.059 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:04.059 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:04.059 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:04.059 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:04.059 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 1530886
00:19:12.171 Initializing NVMe Controllers
00:19:12.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:12.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:19:12.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:19:12.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:19:12.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:19:12.171 Initialization complete. Launching workers.
00:19:12.171 ========================================================
00:19:12.171 Latency(us)
00:19:12.171 Device Information : IOPS MiB/s Average min max
00:19:12.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9386.87 36.67 6817.57 3136.77 9980.24
00:19:12.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9556.76 37.33 6698.58 4771.44 8336.48
00:19:12.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10456.15 40.84 6122.53 3619.10 8232.79
00:19:12.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9277.27 36.24 6898.41 2750.89 10727.24
00:19:12.171 ========================================================
00:19:12.171 Total : 38677.05 151.08 6619.66 2750.89 10727.24
00:19:12.171
00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:12.171 rmmod nvme_tcp 00:19:12.171 rmmod nvme_fabrics 00:19:12.171 rmmod nvme_keyring 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1530762 ']' 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1530762 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1530762 ']' 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1530762 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1530762 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:12.171 10:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1530762' 00:19:12.171 killing process with pid 1530762 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1530762 00:19:12.171 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1530762 00:19:12.429 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:12.429 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:12.429 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:12.429 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:12.429 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:12.429 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.429 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.429 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.333 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:14.333 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:14.333 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:14.898 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:16.801 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:22.080 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:22.080 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:22.080 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.080 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:22.080 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:22.080 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:22.080 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.080 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:22.081 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:22.081 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:22.081 Found net devices under 0000:08:00.0: cvl_0_0 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:22.081 Found net devices under 0000:08:00.1: cvl_0_1 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:22.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:19:22.081 00:19:22.081 --- 10.0.0.2 ping statistics --- 00:19:22.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.081 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:19:22.081 00:19:22.081 --- 10.0.0.1 ping statistics --- 00:19:22.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.081 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.081 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:22.082 net.core.busy_poll = 1 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:22.082 net.core.busy_read = 1 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:22.082 
10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1532889 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1532889 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1532889 ']' 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.082 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.082 [2024-07-25 10:26:11.770977] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:19:22.082 [2024-07-25 10:26:11.771075] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.082 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.082 [2024-07-25 10:26:11.837771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.341 [2024-07-25 10:26:11.955174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.341 [2024-07-25 10:26:11.955238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.341 [2024-07-25 10:26:11.955254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.341 [2024-07-25 10:26:11.955268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.341 [2024-07-25 10:26:11.955280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
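Before this second target instance comes up, adq_configure_driver (traced just above) has applied the actual ADQ plumbing on the target-side port. Condensed into a sketch, with the queue layout and flower filter taken verbatim from this run (tc here is /usr/sbin/tc, as in the trace):

  # hardware TC offload on the E810 port, packet-inspect optimization off
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # busy polling so socket reads spin instead of sleeping
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (NVMe/TCP)
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
      num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # steer the 10.0.0.2:4420 NVMe/TCP flow into TC1, in hardware only
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
      prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # followed by scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS with the RX queues

The effect shows up in the nvmf_get_stats dump further below: with placement-id 1 and --sock-priority 1, the accepted qpairs land on only two poll groups while the other two report current_io_qpairs of 0, which is what the test's jq/wc check asserts before letting the perf run finish.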
00:19:22.341 [2024-07-25 10:26:11.955366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.341 [2024-07-25 10:26:11.955418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.341 [2024-07-25 10:26:11.955470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.341 [2024-07-25 10:26:11.955473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.600 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.600 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:22.600 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.600 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.600 [2024-07-25 10:26:12.194676] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.600 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:22.600 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:22.600 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.600 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.600 Malloc1 00:19:22.600 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.601 [2024-07-25 10:26:12.244008] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1532919 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:22.601 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:22.601 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.505 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:24.505 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.505 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:24.505 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.505 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:24.505 "tick_rate": 2700000000, 00:19:24.505 "poll_groups": [ 00:19:24.505 { 00:19:24.505 "name": "nvmf_tgt_poll_group_000", 00:19:24.505 "admin_qpairs": 1, 00:19:24.505 "io_qpairs": 3, 00:19:24.505 "current_admin_qpairs": 1, 00:19:24.505 
"current_io_qpairs": 3, 00:19:24.505 "pending_bdev_io": 0, 00:19:24.505 "completed_nvme_io": 23824, 00:19:24.505 "transports": [ 00:19:24.505 { 00:19:24.505 "trtype": "TCP" 00:19:24.505 } 00:19:24.505 ] 00:19:24.505 }, 00:19:24.505 { 00:19:24.505 "name": "nvmf_tgt_poll_group_001", 00:19:24.505 "admin_qpairs": 0, 00:19:24.505 "io_qpairs": 1, 00:19:24.505 "current_admin_qpairs": 0, 00:19:24.505 "current_io_qpairs": 1, 00:19:24.505 "pending_bdev_io": 0, 00:19:24.505 "completed_nvme_io": 22910, 00:19:24.505 "transports": [ 00:19:24.505 { 00:19:24.505 "trtype": "TCP" 00:19:24.505 } 00:19:24.505 ] 00:19:24.505 }, 00:19:24.505 { 00:19:24.505 "name": "nvmf_tgt_poll_group_002", 00:19:24.505 "admin_qpairs": 0, 00:19:24.505 "io_qpairs": 0, 00:19:24.505 "current_admin_qpairs": 0, 00:19:24.505 "current_io_qpairs": 0, 00:19:24.505 "pending_bdev_io": 0, 00:19:24.505 "completed_nvme_io": 0, 00:19:24.505 "transports": [ 00:19:24.505 { 00:19:24.505 "trtype": "TCP" 00:19:24.505 } 00:19:24.505 ] 00:19:24.505 }, 00:19:24.505 { 00:19:24.505 "name": "nvmf_tgt_poll_group_003", 00:19:24.505 "admin_qpairs": 0, 00:19:24.505 "io_qpairs": 0, 00:19:24.505 "current_admin_qpairs": 0, 00:19:24.505 "current_io_qpairs": 0, 00:19:24.505 "pending_bdev_io": 0, 00:19:24.505 "completed_nvme_io": 0, 00:19:24.505 "transports": [ 00:19:24.505 { 00:19:24.505 "trtype": "TCP" 00:19:24.505 } 00:19:24.505 ] 00:19:24.505 } 00:19:24.505 ] 00:19:24.505 }' 00:19:24.505 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:24.505 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:24.763 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:24.763 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:24.763 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1532919 00:19:32.925 Initializing NVMe Controllers 00:19:32.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:32.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:32.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:32.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:32.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:32.925 Initialization complete. Launching workers. 
00:19:32.925 ========================================================
00:19:32.925 Latency(us)
00:19:32.925 Device Information : IOPS MiB/s Average min max
00:19:32.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12077.58 47.18 5315.20 2031.99 47361.37
00:19:32.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4362.46 17.04 14675.96 2291.04 63536.61
00:19:32.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4013.56 15.68 15953.66 2396.49 66355.13
00:19:32.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4210.16 16.45 15233.87 2110.08 63904.19
00:19:32.925 ========================================================
00:19:32.925 Total : 24663.76 96.34 10395.25 2031.99 66355.13
00:19:32.925
00:19:32.925 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:32.925 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:32.925 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:32.925 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:32.925 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.926 rmmod nvme_tcp 00:19:32.926 rmmod nvme_fabrics 00:19:32.926 rmmod nvme_keyring 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1532889 ']' 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1532889 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1532889 ']' 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1532889 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1532889 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1532889' killing process with pid 1532889 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1532889 00:19:32.926 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1532889 00:19:33.186 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:33.186 
10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:33.186 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:33.186 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.187 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:33.187 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.187 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.187 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.095 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:35.095 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:35.095 00:19:35.095 real 0m43.004s 00:19:35.095 user 2m35.874s 00:19:35.095 sys 0m10.818s 00:19:35.095 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:35.095 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:35.095 ************************************ 00:19:35.095 END TEST nvmf_perf_adq 00:19:35.095 ************************************ 00:19:35.095 10:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:35.095 10:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:35.095 10:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:35.095 10:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.095 ************************************ 00:19:35.095 START TEST nvmf_shutdown 00:19:35.095 ************************************ 00:19:35.095 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:35.355 * Looking for test storage... 
00:19:35.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:35.355 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.356 10:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:35.356 10:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:35.356 ************************************ 00:19:35.356 START TEST nvmf_shutdown_tc1 00:19:35.356 ************************************ 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.356 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.741 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:36.742 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:36.742 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:36.742 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:36.742 Found net devices under 0000:08:00.0: cvl_0_0 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.742 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.003 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:37.003 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:37.004 Found net devices under 0000:08:00.1: cvl_0_1 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.004 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:37.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:19:37.004 00:19:37.004 --- 10.0.0.2 ping statistics --- 00:19:37.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.004 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:19:37.004 00:19:37.004 --- 10.0.0.1 ping statistics --- 00:19:37.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.004 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1535341 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1535341 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1535341 ']' 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.004 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.004 [2024-07-25 10:26:26.712807] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:19:37.004 [2024-07-25 10:26:26.712904] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.004 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.004 [2024-07-25 10:26:26.779948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:37.263 [2024-07-25 10:26:26.899348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.263 [2024-07-25 10:26:26.899410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.263 [2024-07-25 10:26:26.899426] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.263 [2024-07-25 10:26:26.899440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.263 [2024-07-25 10:26:26.899452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
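Two of the nvmf_tgt flags above decode as follows: -m 0x1E is the reactor core mask, 0x1E = 0b11110, i.e. cores 1 through 4 (matching the four "Reactor started on core N" notices just below), and -e 0xFFFF is the tracepoint group mask behind the "Tracepoint Group Mask 0xFFFF specified" notice. A throwaway helper to expand such a mask by hand (illustrative only, not part of the test suite):

    # Expand an SPDK core mask into the cores it selects.
    mask=0x1E
    for ((i = 0; i < 64; i++)); do
        if (( (mask >> i) & 1 )); then
            echo "reactor expected on core $i"   # prints cores 1,2,3,4 for 0x1E
        fi
    done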
00:19:37.263 [2024-07-25 10:26:26.899528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.263 [2024-07-25 10:26:26.899609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.263 [2024-07-25 10:26:26.899698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:37.263 [2024-07-25 10:26:26.899731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.263 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.263 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:37.264 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:37.264 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.264 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.264 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.264 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:37.264 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.264 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.264 [2024-07-25 10:26:27.034639] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.522 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.522 Malloc1 00:19:37.522 [2024-07-25 10:26:27.107585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.522 Malloc2 00:19:37.522 Malloc3 00:19:37.522 Malloc4 00:19:37.522 Malloc5 00:19:37.781 Malloc6 00:19:37.781 Malloc7 00:19:37.781 Malloc8 00:19:37.781 Malloc9 00:19:37.781 Malloc10 00:19:37.781 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.781 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:37.781 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.781 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1535491 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1535491 /var/tmp/bdevperf.sock 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1535491 ']' 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.782 10:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.782 { 00:19:37.782 "params": { 00:19:37.782 "name": "Nvme$subsystem", 00:19:37.782 "trtype": "$TEST_TRANSPORT", 00:19:37.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.782 "adrfam": "ipv4", 00:19:37.782 "trsvcid": "$NVMF_PORT", 00:19:37.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.782 "hdgst": ${hdgst:-false}, 00:19:37.782 "ddgst": ${ddgst:-false} 00:19:37.782 }, 00:19:37.782 "method": "bdev_nvme_attach_controller" 00:19:37.782 } 00:19:37.782 EOF 00:19:37.782 )") 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.782 { 00:19:37.782 "params": { 00:19:37.782 "name": "Nvme$subsystem", 00:19:37.782 "trtype": "$TEST_TRANSPORT", 00:19:37.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.782 "adrfam": "ipv4", 00:19:37.782 "trsvcid": "$NVMF_PORT", 00:19:37.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.782 "hdgst": ${hdgst:-false}, 00:19:37.782 "ddgst": ${ddgst:-false} 00:19:37.782 }, 00:19:37.782 "method": "bdev_nvme_attach_controller" 00:19:37.782 } 00:19:37.782 EOF 00:19:37.782 )") 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.782 { 00:19:37.782 "params": { 00:19:37.782 "name": 
"Nvme$subsystem", 00:19:37.782 "trtype": "$TEST_TRANSPORT", 00:19:37.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.782 "adrfam": "ipv4", 00:19:37.782 "trsvcid": "$NVMF_PORT", 00:19:37.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.782 "hdgst": ${hdgst:-false}, 00:19:37.782 "ddgst": ${ddgst:-false} 00:19:37.782 }, 00:19:37.782 "method": "bdev_nvme_attach_controller" 00:19:37.782 } 00:19:37.782 EOF 00:19:37.782 )") 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.782 { 00:19:37.782 "params": { 00:19:37.782 "name": "Nvme$subsystem", 00:19:37.782 "trtype": "$TEST_TRANSPORT", 00:19:37.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.782 "adrfam": "ipv4", 00:19:37.782 "trsvcid": "$NVMF_PORT", 00:19:37.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.782 "hdgst": ${hdgst:-false}, 00:19:37.782 "ddgst": ${ddgst:-false} 00:19:37.782 }, 00:19:37.782 "method": "bdev_nvme_attach_controller" 00:19:37.782 } 00:19:37.782 EOF 00:19:37.782 )") 00:19:37.782 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.041 { 00:19:38.041 "params": { 00:19:38.041 "name": "Nvme$subsystem", 00:19:38.041 "trtype": "$TEST_TRANSPORT", 00:19:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.041 "adrfam": "ipv4", 00:19:38.041 "trsvcid": "$NVMF_PORT", 00:19:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.041 "hdgst": ${hdgst:-false}, 00:19:38.041 "ddgst": ${ddgst:-false} 00:19:38.041 }, 00:19:38.041 "method": "bdev_nvme_attach_controller" 00:19:38.041 } 00:19:38.041 EOF 00:19:38.041 )") 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.041 { 00:19:38.041 "params": { 00:19:38.041 "name": "Nvme$subsystem", 00:19:38.041 "trtype": "$TEST_TRANSPORT", 00:19:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.041 "adrfam": "ipv4", 00:19:38.041 "trsvcid": "$NVMF_PORT", 00:19:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.041 "hdgst": ${hdgst:-false}, 00:19:38.041 "ddgst": ${ddgst:-false} 00:19:38.041 }, 00:19:38.041 "method": "bdev_nvme_attach_controller" 00:19:38.041 } 00:19:38.041 EOF 00:19:38.041 )") 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.041 { 00:19:38.041 "params": { 00:19:38.041 "name": "Nvme$subsystem", 00:19:38.041 "trtype": "$TEST_TRANSPORT", 00:19:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.041 "adrfam": "ipv4", 00:19:38.041 "trsvcid": "$NVMF_PORT", 00:19:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.041 "hdgst": ${hdgst:-false}, 00:19:38.041 "ddgst": ${ddgst:-false} 00:19:38.041 }, 00:19:38.041 "method": "bdev_nvme_attach_controller" 00:19:38.041 } 00:19:38.041 EOF 00:19:38.041 )") 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.041 { 00:19:38.041 "params": { 00:19:38.041 "name": "Nvme$subsystem", 00:19:38.041 "trtype": "$TEST_TRANSPORT", 00:19:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.041 "adrfam": "ipv4", 00:19:38.041 "trsvcid": "$NVMF_PORT", 00:19:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.041 "hdgst": ${hdgst:-false}, 00:19:38.041 "ddgst": ${ddgst:-false} 00:19:38.041 }, 00:19:38.041 "method": "bdev_nvme_attach_controller" 00:19:38.041 } 00:19:38.041 EOF 00:19:38.041 )") 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.041 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.041 { 00:19:38.041 "params": { 00:19:38.041 "name": "Nvme$subsystem", 00:19:38.041 "trtype": "$TEST_TRANSPORT", 00:19:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.041 "adrfam": "ipv4", 00:19:38.041 "trsvcid": "$NVMF_PORT", 00:19:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.042 "hdgst": ${hdgst:-false}, 00:19:38.042 "ddgst": ${ddgst:-false} 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 } 00:19:38.042 EOF 00:19:38.042 )") 00:19:38.042 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.042 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.042 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.042 { 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme$subsystem", 00:19:38.042 "trtype": "$TEST_TRANSPORT", 00:19:38.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "$NVMF_PORT", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.042 "hdgst": ${hdgst:-false}, 00:19:38.042 "ddgst": ${ddgst:-false} 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 } 00:19:38.042 EOF 00:19:38.042 )") 00:19:38.042 10:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.042 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:38.042 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:38.042 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme1", 00:19:38.042 "trtype": "tcp", 00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 },{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme2", 00:19:38.042 "trtype": "tcp", 00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 },{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme3", 00:19:38.042 "trtype": "tcp", 00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 },{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme4", 00:19:38.042 "trtype": "tcp", 00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 },{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme5", 00:19:38.042 "trtype": "tcp", 00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 },{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme6", 00:19:38.042 "trtype": "tcp", 00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 },{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme7", 00:19:38.042 "trtype": "tcp", 00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 },{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme8", 00:19:38.042 "trtype": "tcp", 
00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 },{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme9", 00:19:38.042 "trtype": "tcp", 00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 },{ 00:19:38.042 "params": { 00:19:38.042 "name": "Nvme10", 00:19:38.042 "trtype": "tcp", 00:19:38.042 "traddr": "10.0.0.2", 00:19:38.042 "adrfam": "ipv4", 00:19:38.042 "trsvcid": "4420", 00:19:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:38.042 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:38.042 "hdgst": false, 00:19:38.042 "ddgst": false 00:19:38.042 }, 00:19:38.042 "method": "bdev_nvme_attach_controller" 00:19:38.042 }' 00:19:38.042 [2024-07-25 10:26:27.590328] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:19:38.042 [2024-07-25 10:26:27.590418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:38.042 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.042 [2024-07-25 10:26:27.653031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.042 [2024-07-25 10:26:27.770534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.951 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.951 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:39.951 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:39.951 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.951 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:39.951 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.951 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1535491 00:19:39.951 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:39.951 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:40.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1535491 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1535341 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.886 { 00:19:40.886 "params": { 00:19:40.886 "name": "Nvme$subsystem", 00:19:40.886 "trtype": "$TEST_TRANSPORT", 00:19:40.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.886 "adrfam": "ipv4", 00:19:40.886 "trsvcid": "$NVMF_PORT", 00:19:40.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.886 "hdgst": ${hdgst:-false}, 00:19:40.886 "ddgst": ${ddgst:-false} 00:19:40.886 }, 00:19:40.886 "method": "bdev_nvme_attach_controller" 00:19:40.886 } 00:19:40.886 EOF 00:19:40.886 )") 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.886 { 00:19:40.886 "params": { 00:19:40.886 "name": "Nvme$subsystem", 00:19:40.886 "trtype": "$TEST_TRANSPORT", 00:19:40.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.886 "adrfam": "ipv4", 00:19:40.886 "trsvcid": "$NVMF_PORT", 00:19:40.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.886 "hdgst": ${hdgst:-false}, 00:19:40.886 "ddgst": ${ddgst:-false} 00:19:40.886 }, 00:19:40.886 "method": "bdev_nvme_attach_controller" 00:19:40.886 } 00:19:40.886 EOF 00:19:40.886 )") 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.886 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.886 { 00:19:40.886 "params": { 00:19:40.886 "name": "Nvme$subsystem", 00:19:40.886 "trtype": "$TEST_TRANSPORT", 00:19:40.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.886 "adrfam": "ipv4", 00:19:40.886 "trsvcid": "$NVMF_PORT", 00:19:40.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.887 "hdgst": ${hdgst:-false}, 00:19:40.887 "ddgst": ${ddgst:-false} 00:19:40.887 }, 00:19:40.887 "method": "bdev_nvme_attach_controller" 00:19:40.887 } 00:19:40.887 EOF 00:19:40.887 )") 00:19:40.887 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.145 10:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.145 { 00:19:41.145 "params": { 00:19:41.145 "name": "Nvme$subsystem", 00:19:41.145 "trtype": "$TEST_TRANSPORT", 00:19:41.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.145 "adrfam": "ipv4", 00:19:41.145 "trsvcid": "$NVMF_PORT", 00:19:41.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.145 "hdgst": ${hdgst:-false}, 00:19:41.145 "ddgst": ${ddgst:-false} 00:19:41.145 }, 00:19:41.145 "method": "bdev_nvme_attach_controller" 00:19:41.145 } 00:19:41.145 EOF 00:19:41.145 )") 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.145 { 00:19:41.145 "params": { 00:19:41.145 "name": "Nvme$subsystem", 00:19:41.145 "trtype": "$TEST_TRANSPORT", 00:19:41.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.145 "adrfam": "ipv4", 00:19:41.145 "trsvcid": "$NVMF_PORT", 00:19:41.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.145 "hdgst": ${hdgst:-false}, 00:19:41.145 "ddgst": ${ddgst:-false} 00:19:41.145 }, 00:19:41.145 "method": "bdev_nvme_attach_controller" 00:19:41.145 } 00:19:41.145 EOF 00:19:41.145 )") 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.145 { 00:19:41.145 "params": { 00:19:41.145 "name": "Nvme$subsystem", 00:19:41.145 "trtype": "$TEST_TRANSPORT", 00:19:41.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.145 "adrfam": "ipv4", 00:19:41.145 "trsvcid": "$NVMF_PORT", 00:19:41.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.145 "hdgst": ${hdgst:-false}, 00:19:41.145 "ddgst": ${ddgst:-false} 00:19:41.145 }, 00:19:41.145 "method": "bdev_nvme_attach_controller" 00:19:41.145 } 00:19:41.145 EOF 00:19:41.145 )") 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.145 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.145 { 00:19:41.145 "params": { 00:19:41.145 "name": "Nvme$subsystem", 00:19:41.145 "trtype": "$TEST_TRANSPORT", 00:19:41.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.145 "adrfam": "ipv4", 00:19:41.145 "trsvcid": "$NVMF_PORT", 00:19:41.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.145 "hdgst": ${hdgst:-false}, 00:19:41.145 "ddgst": ${ddgst:-false} 00:19:41.145 }, 00:19:41.145 "method": "bdev_nvme_attach_controller" 00:19:41.145 } 00:19:41.145 EOF 00:19:41.145 )") 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.146 { 00:19:41.146 "params": { 00:19:41.146 "name": "Nvme$subsystem", 00:19:41.146 "trtype": "$TEST_TRANSPORT", 00:19:41.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "$NVMF_PORT", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.146 "hdgst": ${hdgst:-false}, 00:19:41.146 "ddgst": ${ddgst:-false} 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 } 00:19:41.146 EOF 00:19:41.146 )") 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.146 { 00:19:41.146 "params": { 00:19:41.146 "name": "Nvme$subsystem", 00:19:41.146 "trtype": "$TEST_TRANSPORT", 00:19:41.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "$NVMF_PORT", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.146 "hdgst": ${hdgst:-false}, 00:19:41.146 "ddgst": ${ddgst:-false} 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 } 00:19:41.146 EOF 00:19:41.146 )") 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.146 { 00:19:41.146 "params": { 00:19:41.146 "name": "Nvme$subsystem", 00:19:41.146 "trtype": "$TEST_TRANSPORT", 00:19:41.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "$NVMF_PORT", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.146 "hdgst": ${hdgst:-false}, 00:19:41.146 "ddgst": ${ddgst:-false} 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 } 00:19:41.146 EOF 00:19:41.146 )") 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=,
00:19:41.146 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme1", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 },{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme2", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 },{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme3", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 },{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme4", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 },{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme5", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 },{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme6", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 },{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme7", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 },{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme8", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 },{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme9", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 },{
00:19:41.146 "params": { 00:19:41.146 "name": "Nvme10", 00:19:41.146 "trtype": "tcp", 00:19:41.146 "traddr": "10.0.0.2", 00:19:41.146 "adrfam": "ipv4", 00:19:41.146 "trsvcid": "4420", 00:19:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:41.146 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:41.146 "hdgst": false, 00:19:41.146 "ddgst": false 00:19:41.146 }, 00:19:41.146 "method": "bdev_nvme_attach_controller" 00:19:41.146 }'
00:19:41.146 [2024-07-25 10:26:30.699638] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:19:41.146 [2024-07-25 10:26:30.699728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535812 ]
00:19:41.146 EAL: No free 2048 kB hugepages reported on node 1
00:19:41.146 [2024-07-25 10:26:30.765220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:41.146 [2024-07-25 10:26:30.884304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:43.057 Running I/O for 1 seconds...
00:19:43.995
00:19:43.995                                                      Latency(us)
00:19:43.995 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average        min        max
00:19:43.995 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme1n1                     :       1.06   180.71    11.29     0.00     0.00  349769.07   32816.55  282727.16
00:19:43.995 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme2n1                     :       1.17   164.07    10.25     0.00     0.00  377911.62   24855.13  313796.08
00:19:43.995 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme3n1                     :       1.22   209.72    13.11     0.00     0.00  290011.21   19709.35  306028.85
00:19:43.995 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme4n1                     :       1.21   210.81    13.18     0.00     0.00  282927.79   30098.01  292047.83
00:19:43.995 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme5n1                     :       1.21   215.53    13.47     0.00     0.00  264391.64   31263.10  288940.94
00:19:43.995 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme6n1                     :       1.24   207.10    12.94     0.00     0.00  275445.19   21554.06  304475.40
00:19:43.995 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme7n1                     :       1.23   212.39    13.27     0.00     0.00  263963.31    1784.04  290494.39
00:19:43.995 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme8n1                     :       1.20   164.02    10.25     0.00     0.00  322439.34    7087.60  306028.85
00:19:43.995 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme9n1                     :       1.24   209.33    13.08     0.00     0.00  256597.99    3422.44  285834.05
00:19:43.995 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:43.995 Verification LBA range: start 0x0 length 0x400
00:19:43.995 Nvme10n1                    :       1.28   199.61    12.48     0.00     0.00  256665.79   17864.63  351078.78
00:19:43.995 ===================================================================================================================
00:19:43.995 Total                       :             1973.29   123.33     0.00     0.00  289401.75    1784.04  351078.78
00:19:44.254 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:19:44.254 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:44.254 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:44.254 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:44.254 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:44.255 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:44.255 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:19:44.255 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:44.255 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:19:44.255 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:44.255 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1535341 ']'
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1535341
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1535341 ']'
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1535341
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
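A reader-side cross-check of the throughput table above (not part of the test itself): with 64 KiB I/Os (IO size: 65536), MiB/s = IOPS x 65536 / 2^20 = IOPS / 16, so the two columns should track each other.

# Sanity-check two rows of the table with bc:
echo "scale=2; 180.71 / 16" | bc    # 11.29  -> matches the Nvme1n1 row
echo "scale=2; 1973.29 / 16" | bc   # 123.33 -> matches the Total row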
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1535341
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1535341'
killing process with pid 1535341
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1535341
00:19:44.515 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1535341
00:19:44.774 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:44.774 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:44.774 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:44.774 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:44.774 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:44.774 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:44.775 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:44.775 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:47.312
00:19:47.312 real 0m11.559s
00:19:47.312 user 0m34.903s
00:19:47.312 sys 0m2.887s
00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:19:47.312 ************************************
00:19:47.312 END TEST nvmf_shutdown_tc1
00:19:47.312 ************************************
00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:19:47.312 ************************************
00:19:47.312 START TEST nvmf_shutdown_tc2
00:19:47.312 ************************************
00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2
00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget
00:19:47.312 10:26:36
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.312 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:47.313 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:47.313 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:47.313 Found net devices under 0000:08:00.0: cvl_0_0 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.313 10:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:47.313 Found net devices under 0000:08:00.1: cvl_0_1 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.313 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.314 10:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:47.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:47.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms
00:19:47.314
00:19:47.314 --- 10.0.0.2 ping statistics ---
00:19:47.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:47.314 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:47.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:47.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms
00:19:47.314
00:19:47.314 --- 10.0.0.1 ping statistics ---
00:19:47.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:47.314 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1536423
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1536423
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:47.314 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:19:47.314 [2024-07-25 10:26:36.782891] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:19:47.314 [2024-07-25 10:26:36.782987] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:47.314 EAL: No free 2048 kB hugepages reported on node 1
00:19:47.314 [2024-07-25 10:26:36.854325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:47.314 [2024-07-25 10:26:36.975416] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:47.314 [2024-07-25 10:26:36.975490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:47.314 [2024-07-25 10:26:36.975509] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:47.314 [2024-07-25 10:26:36.975523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:47.314 [2024-07-25 10:26:36.975535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
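The nvmftestinit trace above isolates the target side of the link in a network namespace so that initiator and target can share one host: one e810 port (cvl_0_0) moves into the namespace with the target IP, the other (cvl_0_1) stays in the root namespace as the initiator, and a single ping each way verifies the path. A condensed sketch of just those commands, using the device names and addresses the log reports on this rig (requires root):

# Condensed from the nvmftestinit/nvmf_tcp_init xtrace above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # target port moves into the namespace
ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1      # initiator side stays in the root namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify both directions, exactly as in the log:
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"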
00:19:47.314 [2024-07-25 10:26:36.975629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.314 [2024-07-25 10:26:36.975682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.314 [2024-07-25 10:26:36.975735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:47.314 [2024-07-25 10:26:36.975738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.575 [2024-07-25 10:26:37.128799] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.575 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.576 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.576 Malloc1 00:19:47.576 [2024-07-25 10:26:37.219417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.576 Malloc2 00:19:47.576 Malloc3 00:19:47.576 Malloc4 00:19:47.836 Malloc5 00:19:47.836 Malloc6 00:19:47.836 Malloc7 00:19:47.836 Malloc8 00:19:47.836 Malloc9 00:19:48.097 Malloc10 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1536565 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1536565 /var/tmp/bdevperf.sock 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1536565 ']' 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.097 10:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.097 { 00:19:48.097 "params": { 00:19:48.097 "name": "Nvme$subsystem", 00:19:48.097 "trtype": "$TEST_TRANSPORT", 00:19:48.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.097 "adrfam": "ipv4", 00:19:48.097 "trsvcid": "$NVMF_PORT", 00:19:48.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.097 "hdgst": ${hdgst:-false}, 00:19:48.097 "ddgst": ${ddgst:-false} 00:19:48.097 }, 00:19:48.097 "method": "bdev_nvme_attach_controller" 00:19:48.097 } 00:19:48.097 EOF 00:19:48.097 )") 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.097 { 00:19:48.097 "params": { 00:19:48.097 "name": "Nvme$subsystem", 00:19:48.097 "trtype": "$TEST_TRANSPORT", 00:19:48.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.097 "adrfam": "ipv4", 00:19:48.097 "trsvcid": "$NVMF_PORT", 00:19:48.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.097 "hdgst": ${hdgst:-false}, 00:19:48.097 "ddgst": ${ddgst:-false} 00:19:48.097 }, 00:19:48.097 "method": "bdev_nvme_attach_controller" 00:19:48.097 } 00:19:48.097 EOF 00:19:48.097 )") 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.097 { 00:19:48.097 "params": { 00:19:48.097 
"name": "Nvme$subsystem", 00:19:48.097 "trtype": "$TEST_TRANSPORT", 00:19:48.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.097 "adrfam": "ipv4", 00:19:48.097 "trsvcid": "$NVMF_PORT", 00:19:48.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.097 "hdgst": ${hdgst:-false}, 00:19:48.097 "ddgst": ${ddgst:-false} 00:19:48.097 }, 00:19:48.097 "method": "bdev_nvme_attach_controller" 00:19:48.097 } 00:19:48.097 EOF 00:19:48.097 )") 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.097 { 00:19:48.097 "params": { 00:19:48.097 "name": "Nvme$subsystem", 00:19:48.097 "trtype": "$TEST_TRANSPORT", 00:19:48.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.097 "adrfam": "ipv4", 00:19:48.097 "trsvcid": "$NVMF_PORT", 00:19:48.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.097 "hdgst": ${hdgst:-false}, 00:19:48.097 "ddgst": ${ddgst:-false} 00:19:48.097 }, 00:19:48.097 "method": "bdev_nvme_attach_controller" 00:19:48.097 } 00:19:48.097 EOF 00:19:48.097 )") 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.097 { 00:19:48.097 "params": { 00:19:48.097 "name": "Nvme$subsystem", 00:19:48.097 "trtype": "$TEST_TRANSPORT", 00:19:48.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.097 "adrfam": "ipv4", 00:19:48.097 "trsvcid": "$NVMF_PORT", 00:19:48.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.097 "hdgst": ${hdgst:-false}, 00:19:48.097 "ddgst": ${ddgst:-false} 00:19:48.097 }, 00:19:48.097 "method": "bdev_nvme_attach_controller" 00:19:48.097 } 00:19:48.097 EOF 00:19:48.097 )") 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.097 { 00:19:48.097 "params": { 00:19:48.097 "name": "Nvme$subsystem", 00:19:48.097 "trtype": "$TEST_TRANSPORT", 00:19:48.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.097 "adrfam": "ipv4", 00:19:48.097 "trsvcid": "$NVMF_PORT", 00:19:48.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.097 "hdgst": ${hdgst:-false}, 00:19:48.097 "ddgst": ${ddgst:-false} 00:19:48.097 }, 00:19:48.097 "method": "bdev_nvme_attach_controller" 00:19:48.097 } 00:19:48.097 EOF 00:19:48.097 )") 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.097 { 00:19:48.097 "params": { 00:19:48.097 "name": "Nvme$subsystem", 00:19:48.097 "trtype": "$TEST_TRANSPORT", 00:19:48.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.097 "adrfam": "ipv4", 00:19:48.097 "trsvcid": "$NVMF_PORT", 00:19:48.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.097 "hdgst": ${hdgst:-false}, 00:19:48.097 "ddgst": ${ddgst:-false} 00:19:48.097 }, 00:19:48.097 "method": "bdev_nvme_attach_controller" 00:19:48.097 } 00:19:48.097 EOF 00:19:48.097 )") 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.097 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.097 { 00:19:48.097 "params": { 00:19:48.097 "name": "Nvme$subsystem", 00:19:48.097 "trtype": "$TEST_TRANSPORT", 00:19:48.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.097 "adrfam": "ipv4", 00:19:48.097 "trsvcid": "$NVMF_PORT", 00:19:48.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.097 "hdgst": ${hdgst:-false}, 00:19:48.097 "ddgst": ${ddgst:-false} 00:19:48.097 }, 00:19:48.097 "method": "bdev_nvme_attach_controller" 00:19:48.097 } 00:19:48.097 EOF 00:19:48.097 )") 00:19:48.098 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.098 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.098 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.098 { 00:19:48.098 "params": { 00:19:48.098 "name": "Nvme$subsystem", 00:19:48.098 "trtype": "$TEST_TRANSPORT", 00:19:48.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "$NVMF_PORT", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.098 "hdgst": ${hdgst:-false}, 00:19:48.098 "ddgst": ${ddgst:-false} 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 } 00:19:48.098 EOF 00:19:48.098 )") 00:19:48.098 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.098 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.098 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.098 { 00:19:48.098 "params": { 00:19:48.098 "name": "Nvme$subsystem", 00:19:48.098 "trtype": "$TEST_TRANSPORT", 00:19:48.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "$NVMF_PORT", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.098 "hdgst": ${hdgst:-false}, 00:19:48.098 "ddgst": ${ddgst:-false} 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 } 00:19:48.098 EOF 00:19:48.098 )") 00:19:48.098 10:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat
00:19:48.098 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq .
00:19:48.098 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=,
00:19:48.098 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme1", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 },{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme2", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 },{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme3", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 },{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme4", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 },{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme5", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 },{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme6", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 },{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme7", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 },{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme8", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 },{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme9", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 },{
00:19:48.098 "params": { 00:19:48.098 "name": "Nvme10", 00:19:48.098 "trtype": "tcp", 00:19:48.098 "traddr": "10.0.0.2", 00:19:48.098 "adrfam": "ipv4", 00:19:48.098 "trsvcid": "4420", 00:19:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:48.098 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:48.098 "hdgst": false, 00:19:48.098 "ddgst": false 00:19:48.098 }, 00:19:48.098 "method": "bdev_nvme_attach_controller" 00:19:48.098 }'
00:19:48.098 [2024-07-25 10:26:37.715772] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:19:48.098 [2024-07-25 10:26:37.715872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536565 ]
00:19:48.098 EAL: No free 2048 kB hugepages reported on node 1
00:19:48.098 [2024-07-25 10:26:37.778808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:48.357 [2024-07-25 10:26:37.895966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:49.737 Running I/O for 10 seconds...
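On the wiring of this run: earlier in the trace, bdevperf is launched as /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10, where /dev/fd/63 is consistent with the config being fed over a bash process substitution. A hedged sketch of that invocation (the process-substitution form is an inference from /dev/fd/63, not something the log states):

# Feed the generated attach-controller config to bdevperf without a temp
# file; <(...) appears to the child process as /dev/fd/<n>.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10   # depth 64, 64 KiB I/Os, verify, 10 s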
00:19:49.995 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.995 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:49.995 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:49.995 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.995 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:50.256 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:50.517 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:50.517 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:50.517 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:50.517 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:50.517 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.517 10:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:50.517 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.517 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:50.517 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:50.517 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1536565 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1536565 ']' 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1536565 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1536565 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1536565' 00:19:50.778 killing process with pid 1536565 00:19:50.778 10:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1536565
00:19:50.778 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1536565
00:19:51.037 Received shutdown signal, test time was about 1.152839 seconds
00:19:51.037
00:19:51.037 Latency(us)
00:19:51.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:51.037 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme1n1 : 1.13 170.25 10.64 0.00 0.00 371523.63 24563.86 320009.86
00:19:51.037 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme2n1 : 1.11 178.12 11.13 0.00 0.00 343515.62 2135.99 310689.19
00:19:51.037 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme3n1 : 1.15 222.26 13.89 0.00 0.00 273451.99 21942.42 313796.08
00:19:51.037 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme4n1 : 1.14 227.31 14.21 0.00 0.00 261667.70 20000.62 316902.97
00:19:51.037 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme5n1 : 1.14 168.89 10.56 0.00 0.00 344948.43 25437.68 333990.87
00:19:51.037 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme6n1 : 1.15 222.87 13.93 0.00 0.00 255730.73 20680.25 315349.52
00:19:51.037 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme7n1 : 1.11 177.80 11.11 0.00 0.00 308899.54 7427.41 310689.19
00:19:51.037 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme8n1 : 1.10 174.18 10.89 0.00 0.00 311141.77 21262.79 312242.63
00:19:51.037 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme9n1 : 1.14 168.25 10.52 0.00 0.00 316479.46 24369.68 361952.90
00:19:51.037 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.037 Verification LBA range: start 0x0 length 0x400
00:19:51.037 Nvme10n1 : 1.12 172.18 10.76 0.00 0.00 300751.58 22622.06 285834.05
00:19:51.037 ===================================================================================================================
00:19:51.037 Total : 1882.11 117.63 0.00 0.00 304722.66 2135.99 361952.90
00:19:51.296 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1536423
00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.237 rmmod nvme_tcp 00:19:52.237 rmmod nvme_fabrics 00:19:52.237 rmmod nvme_keyring 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1536423 ']' 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1536423 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1536423 ']' 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1536423 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1536423 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1536423' 00:19:52.237 killing process with pid 1536423 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1536423 00:19:52.237 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1536423 00:19:52.806 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.806 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
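Both shutdowns above (bdevperf pid 1536565, then the nvmf target pid 1536423) run through the same killprocess helper in autotest_common.sh. A condensed sketch of its logic as the trace exercises it; the branches not taken here (missing pid, a sudo-owned process) are guesses:

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                            # @950: require a pid
      kill -0 "$pid"                                       # @954: is it still alive?
      if [ "$(uname)" = Linux ]; then                      # @955
          process_name=$(ps --no-headers -o comm= "$pid")  # @956: reactor_0 / reactor_1
      fi
      [ "$process_name" = sudo ] && return 1               # @960: guess: refuse to reap a sudo wrapper
      echo "killing process with pid $pid"                 # @968
      kill "$pid"                                          # @969: default SIGTERM, graceful shutdown
      wait "$pid"                                          # @974: reap and propagate the exit status
  }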
00:19:52.806 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:52.806 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.806 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.806 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.806 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.806 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.715 00:19:54.715 real 0m7.817s 00:19:54.715 user 0m23.909s 00:19:54.715 sys 0m1.514s 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:54.715 ************************************ 00:19:54.715 END TEST nvmf_shutdown_tc2 00:19:54.715 ************************************ 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:54.715 ************************************ 00:19:54.715 START TEST nvmf_shutdown_tc3 00:19:54.715 ************************************ 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
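tc3 then re-runs nvmftestinit. The long block below is gather_supported_nvmf_pci_devs matching the e810/x722/mlx PCI device-ID tables and resolving each hit (0000:08:00.0 and 0000:08:00.1 here) to its kernel net device via sysfs. The shape of that resolution step, condensed; the up-state test is assumed to read operstate, since the trace only shows the resulting [[ up == up ]] comparison:

  net_devs=()
  for pci in 0000:08:00.0 0000:08:00.1; do                 # the two e810 ports found below
      for dev_path in /sys/bus/pci/devices/$pci/net/*; do
          dev=${dev_path##*/}                              # e.g. cvl_0_0, cvl_0_1
          # assumption: link state comes from sysfs operstate
          [ "$(cat "$dev_path/operstate" 2>/dev/null)" = up ] && net_devs+=("$dev")
      done
  done
  echo "Found net devices: ${net_devs[*]}"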
00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:54.715 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:54.716 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:54.716 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:54.716 10:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:54.716 Found net devices under 0000:08:00.0: cvl_0_0 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:54.716 Found net devices under 0000:08:00.1: cvl_0_1 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.716 10:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.716 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.977 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.977 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.977 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:54.977 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:54.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:19:54.978 00:19:54.978 --- 10.0.0.2 ping statistics --- 00:19:54.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.978 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:54.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:19:54.978 00:19:54.978 --- 10.0.0.1 ping statistics --- 00:19:54.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.978 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1537373 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1537373 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1537373 ']' 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
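With both pings answered, nvmf_tcp_init is complete. The namespace plumbing it performed, collected from the trace above into one root-shell sequence (interface and namespace names exactly as traced):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC moves into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

nvmf_tgt is then launched inside that namespace (nvmfappstart, pid 1537373) and the script waits for its RPC socket, as the trace below shows.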
00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:54.978 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.978 [2024-07-25 10:26:44.673903] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:19:54.978 [2024-07-25 10:26:44.674000] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.978 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.978 [2024-07-25 10:26:44.740783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.237 [2024-07-25 10:26:44.861572] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.237 [2024-07-25 10:26:44.861639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.237 [2024-07-25 10:26:44.861655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.237 [2024-07-25 10:26:44.861668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.237 [2024-07-25 10:26:44.861679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.237 [2024-07-25 10:26:44.861775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.237 [2024-07-25 10:26:44.861861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.237 [2024-07-25 10:26:44.861944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.237 [2024-07-25 10:26:44.861948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.237 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:55.237 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:55.237 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.237 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:55.237 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.237 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.237 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.237 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.237 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.237 [2024-07-25 10:26:45.007639] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:55.498 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:55.499 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.499 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.499 Malloc1 00:19:55.499 [2024-07-25 10:26:45.082421] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.499 Malloc2 00:19:55.499 Malloc3 00:19:55.499 Malloc4 00:19:55.499 Malloc5 00:19:55.758 Malloc6 00:19:55.758 Malloc7 00:19:55.758 Malloc8 00:19:55.758 Malloc9 00:19:55.758 Malloc10 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1537439 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1537439 /var/tmp/bdevperf.sock 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1537439 ']' 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
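The create_subsystems block above appended ten fragments to rpcs.txt (the repeated cat at shutdown.sh@28), and executing the batch produced Malloc1 through Malloc10 plus the listener on 10.0.0.2:4420; the fragment text itself is never echoed. A plausible reconstruction of the batch using standard SPDK RPC names, with illustrative sizes and serial numbers (the real helper uses a heredoc, the traced cat; printf is used here to keep the sketch indentation-safe):

  for i in {1..10}; do
      printf '%s\n' \
          "bdev_malloc_create -b Malloc$i 64 512" \
          "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
          "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
          "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
  done > rpcs.txt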
00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.758 { 00:19:55.758 "params": { 00:19:55.758 "name": "Nvme$subsystem", 00:19:55.758 "trtype": "$TEST_TRANSPORT", 00:19:55.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.758 "adrfam": "ipv4", 00:19:55.758 "trsvcid": "$NVMF_PORT", 00:19:55.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.758 "hdgst": ${hdgst:-false}, 00:19:55.758 "ddgst": ${ddgst:-false} 00:19:55.758 }, 00:19:55.758 "method": "bdev_nvme_attach_controller" 00:19:55.758 } 00:19:55.758 EOF 00:19:55.758 )") 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.758 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.758 { 00:19:55.758 "params": { 00:19:55.758 "name": "Nvme$subsystem", 00:19:55.758 "trtype": "$TEST_TRANSPORT", 00:19:55.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.758 "adrfam": "ipv4", 00:19:55.758 "trsvcid": "$NVMF_PORT", 00:19:55.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.759 "hdgst": ${hdgst:-false}, 00:19:55.759 "ddgst": ${ddgst:-false} 00:19:55.759 }, 00:19:55.759 "method": "bdev_nvme_attach_controller" 00:19:55.759 } 00:19:55.759 EOF 00:19:55.759 )") 00:19:55.759 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:55.759 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.759 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.759 { 00:19:55.759 "params": { 00:19:55.759 "name": "Nvme$subsystem", 00:19:55.759 "trtype": "$TEST_TRANSPORT", 00:19:55.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.759 "adrfam": "ipv4", 00:19:55.759 "trsvcid": "$NVMF_PORT", 00:19:55.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.759 "hdgst": ${hdgst:-false}, 00:19:55.759 "ddgst": ${ddgst:-false} 00:19:55.759 }, 00:19:55.759 "method": "bdev_nvme_attach_controller" 00:19:55.759 } 00:19:55.759 EOF 00:19:55.759 )") 00:19:55.759 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:55.759 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.759 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:19:55.759 { 00:19:55.759 "params": { 00:19:55.759 "name": "Nvme$subsystem", 00:19:55.759 "trtype": "$TEST_TRANSPORT", 00:19:55.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.759 "adrfam": "ipv4", 00:19:55.759 "trsvcid": "$NVMF_PORT", 00:19:55.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.759 "hdgst": ${hdgst:-false}, 00:19:55.759 "ddgst": ${ddgst:-false} 00:19:55.759 }, 00:19:55.759 "method": "bdev_nvme_attach_controller" 00:19:55.759 } 00:19:55.759 EOF 00:19:55.759 )") 00:19:55.759 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.018 { 00:19:56.018 "params": { 00:19:56.018 "name": "Nvme$subsystem", 00:19:56.018 "trtype": "$TEST_TRANSPORT", 00:19:56.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.018 "adrfam": "ipv4", 00:19:56.018 "trsvcid": "$NVMF_PORT", 00:19:56.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.018 "hdgst": ${hdgst:-false}, 00:19:56.018 "ddgst": ${ddgst:-false} 00:19:56.018 }, 00:19:56.018 "method": "bdev_nvme_attach_controller" 00:19:56.018 } 00:19:56.018 EOF 00:19:56.018 )") 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.018 { 00:19:56.018 "params": { 00:19:56.018 "name": "Nvme$subsystem", 00:19:56.018 "trtype": "$TEST_TRANSPORT", 00:19:56.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.018 "adrfam": "ipv4", 00:19:56.018 "trsvcid": "$NVMF_PORT", 00:19:56.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.018 "hdgst": ${hdgst:-false}, 00:19:56.018 "ddgst": ${ddgst:-false} 00:19:56.018 }, 00:19:56.018 "method": "bdev_nvme_attach_controller" 00:19:56.018 } 00:19:56.018 EOF 00:19:56.018 )") 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.018 { 00:19:56.018 "params": { 00:19:56.018 "name": "Nvme$subsystem", 00:19:56.018 "trtype": "$TEST_TRANSPORT", 00:19:56.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.018 "adrfam": "ipv4", 00:19:56.018 "trsvcid": "$NVMF_PORT", 00:19:56.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.018 "hdgst": ${hdgst:-false}, 00:19:56.018 "ddgst": ${ddgst:-false} 00:19:56.018 }, 00:19:56.018 "method": "bdev_nvme_attach_controller" 00:19:56.018 } 00:19:56.018 EOF 00:19:56.018 )") 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:56.018 10:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.018 { 00:19:56.018 "params": { 00:19:56.018 "name": "Nvme$subsystem", 00:19:56.018 "trtype": "$TEST_TRANSPORT", 00:19:56.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.018 "adrfam": "ipv4", 00:19:56.018 "trsvcid": "$NVMF_PORT", 00:19:56.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.018 "hdgst": ${hdgst:-false}, 00:19:56.018 "ddgst": ${ddgst:-false} 00:19:56.018 }, 00:19:56.018 "method": "bdev_nvme_attach_controller" 00:19:56.018 } 00:19:56.018 EOF 00:19:56.018 )") 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.018 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.018 { 00:19:56.018 "params": { 00:19:56.018 "name": "Nvme$subsystem", 00:19:56.018 "trtype": "$TEST_TRANSPORT", 00:19:56.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.018 "adrfam": "ipv4", 00:19:56.018 "trsvcid": "$NVMF_PORT", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.019 "hdgst": ${hdgst:-false}, 00:19:56.019 "ddgst": ${ddgst:-false} 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 } 00:19:56.019 EOF 00:19:56.019 )") 00:19:56.019 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:56.019 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.019 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.019 { 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme$subsystem", 00:19:56.019 "trtype": "$TEST_TRANSPORT", 00:19:56.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "$NVMF_PORT", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.019 "hdgst": ${hdgst:-false}, 00:19:56.019 "ddgst": ${ddgst:-false} 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 } 00:19:56.019 EOF 00:19:56.019 )") 00:19:56.019 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:56.019 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
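gen_nvmf_target_json has now queued one heredoc fragment per subsystem in the config array; nvmf/common.sh@556-@558 splice them into the comma-separated list printed next. The join mechanics, with a hypothetical two-element array standing in for the real ten (the outer JSON wrapper that makes the result valid input for jq is not echoed by the trace and is omitted here):

  config=('{ "params": { "name": "Nvme1" }, "method": "bdev_nvme_attach_controller" }'
          '{ "params": { "name": "Nvme2" }, "method": "bdev_nvme_attach_controller" }')
  IFS=,                            # @557: comma becomes the join character
  printf '%s\n' "${config[*]}"     # @558: [*] expands to a single comma-joined string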
00:19:56.019 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:56.019 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme1", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 },{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme2", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 },{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme3", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 },{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme4", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 },{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme5", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 },{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme6", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 },{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme7", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 },{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme8", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 },{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme9", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 },{ 00:19:56.019 "params": { 00:19:56.019 "name": "Nvme10", 00:19:56.019 "trtype": "tcp", 00:19:56.019 "traddr": "10.0.0.2", 00:19:56.019 "adrfam": "ipv4", 00:19:56.019 "trsvcid": "4420", 00:19:56.019 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:56.019 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:56.019 "hdgst": false, 00:19:56.019 "ddgst": false 00:19:56.019 }, 00:19:56.019 "method": "bdev_nvme_attach_controller" 00:19:56.019 }' 00:19:56.019 [2024-07-25 10:26:45.566980] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:19:56.019 [2024-07-25 10:26:45.567070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537439 ] 00:19:56.019 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.019 [2024-07-25 10:26:45.629626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.019 [2024-07-25 10:26:45.746320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.955 Running I/O for 10 seconds... 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:57.955 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:58.221 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:58.480 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:58.480 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:58.480 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:58.480 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:58.480 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.480 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:58.755 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.756 10:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1537373 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1537373 ']' 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1537373 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1537373 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1537373' 00:19:58.756 killing process with pid 1537373 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1537373 00:19:58.756 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1537373 00:19:58.756 [2024-07-25 10:26:48.310643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310912] 
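For reference, the target/shutdown.sh@59-67 polling traced above (read counts 3, then 67, then 131 across three iterations) reduces to a bounded retry loop over bdev_get_iostat; a sketch assuming rpc_cmd is the suite's wrapper around scripts/rpc.py that forwards the -s socket option:

waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i count
    # Poll up to 10 times, 0.25 s apart, until the bdev has seen 100 reads.
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0 # enough I/O observed; the test can start shutting down
            break
        fi
        sleep 0.25
    done
    return $ret
}

Once num_read_ops crosses 100 the loop breaks with ret=0, which is what lets shutdown.sh@135 proceed to killprocess above.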
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.310999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the 
state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.311668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fed80 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.312954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16018a0 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.314059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.314086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.314100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.756 [2024-07-25 10:26:48.314113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 
10:26:48.314192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same 
with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314793] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.314924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff240 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the 
state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.757 [2024-07-25 10:26:48.316711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.316998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 
10:26:48.317129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.317234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff700 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same 
with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.318996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319084] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.758 [2024-07-25 10:26:48.319192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the 
state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.319488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffbe0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 [2024-07-25 10:26:48.320860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.759 [2024-07-25 10:26:48.320885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759 
[2024-07-25 10:26:48.320904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.759
[2024-07-25 10:26:48.320906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759
[2024-07-25 10:26:48.320922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759
[2024-07-25 10:26:48.320925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.759
[2024-07-25 10:26:48.320936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759
[2024-07-25 10:26:48.320941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.759
[2024-07-25 10:26:48.320950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759
[2024-07-25 10:26:48.320957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.759
[2024-07-25 10:26:48.320964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759
[2024-07-25 10:26:48.320972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.759
[2024-07-25 10:26:48.320978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759
[2024-07-25 10:26:48.320988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.759
[2024-07-25 10:26:48.320992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.759
[2024-07-25 10:26:48.321003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.759
[2024-07-25 10:26:48.321014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26eb690 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321028] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760
[2024-07-25 10:26:48.321120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760
[2024-07-25 10:26:48.321134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760
[2024-07-25 10:26:48.321147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760
[2024-07-25 10:26:48.321161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760
[2024-07-25 10:26:48.321176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760
[2024-07-25 10:26:48.321191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760
[2024-07-25 10:26:48.321222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760
[2024-07-25 10:26:48.321236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26dae00 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760
[2024-07-25 10:26:48.321276]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760 [2024-07-25 10:26:48.321289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760 [2024-07-25 10:26:48.321303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16000a0 is same with the state(5) to be set 00:19:58.760 [2024-07-25 10:26:48.321338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254bc00 is same with the state(5) to be set 00:19:58.760 [2024-07-25 10:26:48.321521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fad0 is same with the state(5) 
to be set 00:19:58.760 [2024-07-25 10:26:48.321719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.760 [2024-07-25 10:26:48.321836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.321850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254f4b0 is same with the state(5) to be set 00:19:58.760 [2024-07-25 10:26:48.322846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-07-25 10:26:48.322876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.760 [2024-07-25 10:26:48.322905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.760 [2024-07-25 10:26:48.322921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.322939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.322954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.322971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.322985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 
10:26:48.323048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323370] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.323968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.323985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.324009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761 [2024-07-25 10:26:48.324027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761 [2024-07-25 10:26:48.324027] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.761
[2024-07-25 10:26:48.324041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761
[2024-07-25 10:26:48.324058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.761
[2024-07-25 10:26:48.324059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761
[2024-07-25 10:26:48.324076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.761
[2024-07-25 10:26:48.324077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.761
[2024-07-25 10:26:48.324093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.761
[2024-07-25 10:26:48.324096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.761
[2024-07-25 10:26:48.324107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.762
[2024-07-25 10:26:48.324724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.762
[2024-07-25 10:26:48.324728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.762
[2024-07-25 10:26:48.324737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.763
[2024-07-25 10:26:48.324751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.763
[2024-07-25 10:26:48.324764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.763
[2024-07-25 10:26:48.324778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.763
[2024-07-25 10:26:48.324807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.763
[2024-07-25 10:26:48.324821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.763
[2024-07-25 10:26:48.324834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.763
[2024-07-25 10:26:48.324848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.763
[2024-07-25 10:26:48.324862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.763
[2024-07-25 10:26:48.324885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.763
[2024-07-25 10:26:48.324899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.763
[2024-07-25 10:26:48.324915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.763
[2024-07-25 10:26:48.324944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.763
[2024-07-25 10:26:48.324958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.763
[2024-07-25 10:26:48.324971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.763
[2024-07-25 10:26:48.324985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a40 is same with the state(5) to be set 00:19:58.763
[2024-07-25 10:26:48.324999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.763
[2024-07-25 10:26:48.325014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.763
[2024-07-25 10:26:48.325053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:58.763
[2024-07-25 10:26:48.325115] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2781100 was disconnected and freed. reset controller.
00:19:58.763 [2024-07-25 10:26:48.326124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.763 [2024-07-25 10:26:48.326994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327021] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.327265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f00 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.328081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.328112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764 [2024-07-25 10:26:48.328119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.764
[2024-07-25 10:26:48.328571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.764
[2024-07-25 10:26:48.328594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.764
[2024-07-25 10:26:48.328598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.765
[2024-07-25 10:26:48.328614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.765
[2024-07-25 10:26:48.328630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.765
[2024-07-25 10:26:48.328645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.765
[2024-07-25 10:26:48.328659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.765
[2024-07-25 10:26:48.328672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.765
[2024-07-25 10:26:48.328686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.765
[2024-07-25 10:26:48.328715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.765
[2024-07-25 10:26:48.328729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.765
[2024-07-25 10:26:48.328742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.765
[2024-07-25 10:26:48.328772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.765
[2024-07-25 10:26:48.328787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.765
[2024-07-25 10:26:48.328800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765
[2024-07-25 10:26:48.328807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.765
[2024-07-25 10:26:48.328813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.765 [2024-07-25 10:26:48.328825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:1[2024-07-25 10:26:48.328827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.765 the state(5) to be set 00:19:58.765 [2024-07-25 10:26:48.328842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with [2024-07-25 10:26:48.328841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:19:58.766 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.328857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.328868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:1[2024-07-25 10:26:48.328870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.328884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with [2024-07-25 10:26:48.328885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:19:58.766 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.328899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.328904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.328912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.328919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.328926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.328936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.328940] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.328951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.328962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.328969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.328976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.328984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.328990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.329002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.329004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.329017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 10:26:48.329018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013c0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 the state(5) to be set 00:19:58.766 [2024-07-25 10:26:48.329037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.329051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.329074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.329099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.329128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.329144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.329161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.329176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.329193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.329207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.766 [2024-07-25 10:26:48.329224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.766 [2024-07-25 10:26:48.329239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.329880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.329943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.330018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.330083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.330155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.330220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.330294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.330358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.330441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.330549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.359930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.360378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.767 [2024-07-25 10:26:48.360393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.361631] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2517690 was disconnected and freed. reset controller. 00:19:58.767 [2024-07-25 10:26:48.361692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:58.767 [2024-07-25 10:26:48.361836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2025610 (9): Bad file descriptor 00:19:58.767 [2024-07-25 10:26:48.361909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26eb690 (9): Bad file descriptor 00:19:58.767 [2024-07-25 10:26:48.361982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.767 [2024-07-25 10:26:48.362006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.767 [2024-07-25 10:26:48.362023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.767 [2024-07-25 10:26:48.362039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e4ea0 is same with the state(5) to be set 00:19:58.768 [2024-07-25 10:26:48.362153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26dae00 (9): Bad file descriptor 00:19:58.768 [2024-07-25 10:26:48.362203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d9bb0 is same with the state(5) to be set 00:19:58.768 [2024-07-25 10:26:48.362413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254f140 is same with the state(5) to be set 00:19:58.768 [2024-07-25 10:26:48.362598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362697] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.768 [2024-07-25 10:26:48.362717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.362732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c36f0 is same with the state(5) to be set 00:19:58.768 [2024-07-25 10:26:48.362764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254bc00 (9): Bad file descriptor 00:19:58.768 [2024-07-25 10:26:48.362797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251fad0 (9): Bad file descriptor 00:19:58.768 [2024-07-25 10:26:48.362827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254f4b0 (9): Bad file descriptor 00:19:58.768 [2024-07-25 10:26:48.365631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:58.768 [2024-07-25 10:26:48.365937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.768 [2024-07-25 10:26:48.365984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2025610 with addr=10.0.0.2, port=4420 00:19:58.768 [2024-07-25 10:26:48.366004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025610 is same with the state(5) to be set 00:19:58.768 [2024-07-25 10:26:48.366079] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:58.768 [2024-07-25 10:26:48.366202] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:58.768 [2024-07-25 10:26:48.366272] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:58.768 [2024-07-25 10:26:48.366341] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:58.768 [2024-07-25 10:26:48.367017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.768 [2024-07-25 10:26:48.367049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26dae00 with addr=10.0.0.2, port=4420 00:19:58.768 [2024-07-25 10:26:48.367067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26dae00 is same with the state(5) to be set 00:19:58.768 [2024-07-25 10:26:48.367093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2025610 (9): Bad file descriptor 00:19:58.768 [2024-07-25 10:26:48.367276] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:58.768 [2024-07-25 10:26:48.367347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.768 [2024-07-25 10:26:48.367846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.768 [2024-07-25 10:26:48.367864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.367880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.367898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.367913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.367931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.367945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.367963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.367978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:58.769 [2024-07-25 10:26:48.368488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 
10:26:48.368814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.368968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.368983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.369000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.369015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.369032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.369047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.369065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.769 [2024-07-25 10:26:48.369080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.769 [2024-07-25 10:26:48.369098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.770 [2024-07-25 10:26:48.369113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.770 [2024-07-25 10:26:48.369130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.770 [2024-07-25 10:26:48.369145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:58.770 [2024-07-25 10:26:48.369162 .. 369526] [... 11 repeated record pairs: nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53..63 nsid:1 lba:23168..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:19:58.770 [2024-07-25 10:26:48.369542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2514ef0 is same with the state(5) to be set
00:19:58.770 [2024-07-25 10:26:48.369625] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2514ef0 was disconnected and freed. reset controller.
00:19:58.770 [2024-07-25 10:26:48.369839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26dae00 (9): Bad file descriptor
00:19:58.770 [2024-07-25 10:26:48.369873] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:19:58.770 [2024-07-25 10:26:48.369888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:19:58.770 [2024-07-25 10:26:48.369906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:19:58.770 [2024-07-25 10:26:48.370003 .. 372179] [... 64 repeated record pairs: nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:19:58.772 [2024-07-25 10:26:48.372199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2660c00 is same with the state(5) to be set
00:19:58.772 [2024-07-25 10:26:48.372280] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2660c00 was disconnected and freed. reset controller.
00:19:58.772 [2024-07-25 10:26:48.373837] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:58.772 [2024-07-25 10:26:48.373928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.772 [2024-07-25 10:26:48.373960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:58.772 [2024-07-25 10:26:48.373999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e4ea0 (9): Bad file descriptor
00:19:58.772 [2024-07-25 10:26:48.374025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:19:58.772 [2024-07-25 10:26:48.374041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:19:58.772 [2024-07-25 10:26:48.374058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:19:58.772 [2024-07-25 10:26:48.374134] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.772 [2024-07-25 10:26:48.374170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d9bb0 (9): Bad file descriptor
00:19:58.772 [2024-07-25 10:26:48.374205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254f140 (9): Bad file descriptor
00:19:58.772 [2024-07-25 10:26:48.374239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c36f0 (9): Bad file descriptor
00:19:58.772 [2024-07-25 10:26:48.375746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.772 [2024-07-25 10:26:48.375790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:58.772 [2024-07-25 10:26:48.375907 .. 378079] [... 64 repeated record pairs: nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:19:58.774 [2024-07-25 10:26:48.378095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x265fa00 is same with the state(5) to be set
00:19:58.774 [2024-07-25 10:26:48.379607 .. 381341] [... 52 repeated record pairs: nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..51 nsid:1 lba:16384..22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:19:58.775 [2024-07-25 10:26:48.381355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 
10:26:48.381707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.775 [2024-07-25 10:26:48.381773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.775 [2024-07-25 10:26:48.381789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26620b0 is same with the state(5) to be set 00:19:58.775 [2024-07-25 10:26:48.383293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.383980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.383997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.776 [2024-07-25 10:26:48.384356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.776 [2024-07-25 10:26:48.384371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.384982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.384999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.777 [2024-07-25 10:26:48.385438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.777 [2024-07-25 10:26:48.385456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x277e7a0 is same with the state(5) to be set 00:19:58.777 [2024-07-25 10:26:48.387307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:58.777 [2024-07-25 10:26:48.387372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:58.777 [2024-07-25 10:26:48.387719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.778 [2024-07-25 10:26:48.387778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26e4ea0 with addr=10.0.0.2, port=4420 00:19:58.778 [2024-07-25 10:26:48.387800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e4ea0 is same with the state(5) to be set 00:19:58.778 [2024-07-25 10:26:48.387934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.778 [2024-07-25 10:26:48.387961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x26eb690 with addr=10.0.0.2, port=4420 00:19:58.778 [2024-07-25 10:26:48.387978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26eb690 is same with the state(5) to be set 00:19:58.778 [2024-07-25 10:26:48.388063] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:58.778 [2024-07-25 10:26:48.388088] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:58.778 [2024-07-25 10:26:48.388110] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:58.778 [2024-07-25 10:26:48.388137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26eb690 (9): Bad file descriptor 00:19:58.778 [2024-07-25 10:26:48.388165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e4ea0 (9): Bad file descriptor 00:19:58.778 [2024-07-25 10:26:48.388584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:58.778 [2024-07-25 10:26:48.388613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:58.778 [2024-07-25 10:26:48.388632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:58.778 [2024-07-25 10:26:48.388801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.778 [2024-07-25 10:26:48.388830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251fad0 with addr=10.0.0.2, port=4420 00:19:58.778 [2024-07-25 10:26:48.388847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fad0 is same with the state(5) to be set 00:19:58.778 [2024-07-25 10:26:48.388994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.778 [2024-07-25 10:26:48.389021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254bc00 with addr=10.0.0.2, port=4420 00:19:58.778 [2024-07-25 10:26:48.389038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254bc00 is same with the state(5) to be set 00:19:58.778 [2024-07-25 10:26:48.390069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:58.778 [2024-07-25 10:26:48.390220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 
10:26:48.390578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390909] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.390974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.390992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.391006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.391024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.391038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.391056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.778 [2024-07-25 10:26:48.391070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.778 [2024-07-25 10:26:48.391091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.779 [2024-07-25 10:26:48.391878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.779 [2024-07-25 10:26:48.391893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 10 more READ / ABORTED - SQ DELETION notice pairs (cid:54-63, lba 23296-24448) condensed ...]
00:19:58.780 [2024-07-25 10:26:48.392237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x277fc50 is same with the state(5) to be set
[... 64 READ / ABORTED - SQ DELETION notice pairs (cid:0-63, lba 16384-24448) condensed ...]
00:19:58.782 [2024-07-25 10:26:48.395906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27825f0 is same with the state(5) to be set
[... 64 READ / ABORTED - SQ DELETION notice pairs (cid:0-63, lba 16384-24448) condensed ...]
00:19:58.783 [2024-07-25 10:26:48.399594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25161c0 is same with the state(5) to be set
00:19:58.783 [2024-07-25 10:26:48.401365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:58.783 [2024-07-25 10:26:48.401422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
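
The READ / ABORTED - SQ DELETION pairs above are expected noise for this shutdown test: once the target tears down a submission queue, every command still outstanding on it completes with NVMe status "Command Aborted due to SQ Deletion". The "(00/08)" in each completion is that status printed as SCT/SC (status code type 00h, generic; status code 08h), and dnr:0 means the Do Not Retry bit is clear, so the initiator may resubmit after reconnecting. When triaging a log like this, a quick summary is usually more useful than the raw dump; a sketch (the log filename is a placeholder):

  # total aborted commands, then recv-state errors grouped by qpair
  grep -c 'ABORTED - SQ DELETION' console.log
  grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn
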
00:19:58.783 task offset: 16384 on job bdev=Nvme6n1 fails
00:19:58.783 
00:19:58.783 Latency(us)
00:19:58.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:58.783 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.783 Job: Nvme1n1 ended in about 1.00 seconds with error
00:19:58.783 Verification LBA range: start 0x0 length 0x400
00:19:58.783 Nvme1n1 : 1.00 127.39 7.96 63.69 0.00 330803.33 22719.15 299815.06
00:19:58.783 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.783 Job: Nvme2n1 ended in about 1.00 seconds with error
00:19:58.783 Verification LBA range: start 0x0 length 0x400
00:19:58.783 Nvme2n1 : 1.00 127.87 7.99 63.93 0.00 321940.54 32039.82 309135.74
00:19:58.783 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.783 Job: Nvme3n1 ended in about 1.01 seconds with error
00:19:58.783 Verification LBA range: start 0x0 length 0x400
00:19:58.783 Nvme3n1 : 1.01 126.92 7.93 63.46 0.00 316920.41 23884.23 351078.78
00:19:58.783 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.783 Job: Nvme4n1 ended in about 1.01 seconds with error
00:19:58.784 Verification LBA range: start 0x0 length 0x400
00:19:58.784 Nvme4n1 : 1.01 126.46 7.90 63.23 0.00 310504.11 28156.21 302921.96
00:19:58.784 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.784 Job: Nvme5n1 ended in about 1.02 seconds with error
00:19:58.784 Verification LBA range: start 0x0 length 0x400
00:19:58.784 Nvme5n1 : 1.02 125.62 7.85 62.81 0.00 305135.31 26214.40 296708.17
00:19:58.784 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.784 Job: Nvme6n1 ended in about 0.95 seconds with error
00:19:58.784 Verification LBA range: start 0x0 length 0x400
00:19:58.784 Nvme6n1 : 0.95 134.26 8.39 67.13 0.00 275719.46 31263.10 329330.54
00:19:58.784 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.784 Job: Nvme7n1 ended in about 1.02 seconds with error
00:19:58.784 Verification LBA range: start 0x0 length 0x400
00:19:58.784 Nvme7n1 : 1.02 125.18 7.82 62.59 0.00 291170.48 25826.04 333990.87
00:19:58.784 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.784 Job: Nvme8n1 ended in about 1.00 seconds with error
00:19:58.784 Verification LBA range: start 0x0 length 0x400
00:19:58.784 Nvme8n1 : 1.00 128.11 8.01 64.05 0.00 275919.45 24369.68 304475.40
00:19:58.784 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.784 Job: Nvme9n1 ended in about 1.03 seconds with error
00:19:58.784 Verification LBA range: start 0x0 length 0x400
00:19:58.784 Nvme9n1 : 1.03 124.73 7.80 62.36 0.00 277641.54 18155.90 276513.37
00:19:58.784 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.784 Job: Nvme10n1 ended in about 0.99 seconds with error
00:19:58.784 Verification LBA range: start 0x0 length 0x400
00:19:58.784 Nvme10n1 : 0.99 129.27 8.08 64.64 0.00 258367.59 28544.57 340204.66
00:19:58.784 ===================================================================================================================
00:19:58.784 Total : 1275.80 79.74 637.90 0.00 296412.22 18155.90 351078.78
00:19:58.784 [2024-07-25 10:26:48.429257] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:58.784 [2024-07-25 10:26:48.429354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
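
For reference, the per-job rows above are bdevperf output: "Latency(us)" means the Average/min/max columns are microseconds, and the throughput columns are self-consistent (e.g. for Nvme1n1, 127.39 IOPS x 64 KiB IOs = 7.96 MiB/s). The job header "Core Mask 0x1, workload: verify, depth: 64, IO size: 65536" corresponds to an invocation roughly like the sketch below; the binary path, config name, and runtime are assumptions (only the bdevperf.conf filename is visible, in the cleanup step further down), not the literal command shutdown.sh ran:

  # 64 KiB verify workload at queue depth 64 against bdevs described in bdevperf.conf
  ./build/examples/bdevperf -c bdevperf.conf -q 64 -o 65536 -w verify -t 10
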
00:19:58.784 [2024-07-25 10:26:48.429752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.784 [2024-07-25 10:26:48.429793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254f4b0 with addr=10.0.0.2, port=4420
00:19:58.784 [2024-07-25 10:26:48.429814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254f4b0 is same with the state(5) to be set
00:19:58.784 [2024-07-25 10:26:48.429967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.784 [2024-07-25 10:26:48.429993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2025610 with addr=10.0.0.2, port=4420
00:19:58.784 [2024-07-25 10:26:48.430010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025610 is same with the state(5) to be set
00:19:58.784 [2024-07-25 10:26:48.430171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.784 [2024-07-25 10:26:48.430200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26dae00 with addr=10.0.0.2, port=4420
00:19:58.784 [2024-07-25 10:26:48.430217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26dae00 is same with the state(5) to be set
00:19:58.784 [2024-07-25 10:26:48.430244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251fad0 (9): Bad file descriptor
00:19:58.784 [2024-07-25 10:26:48.430277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254bc00 (9): Bad file descriptor
00:19:58.784 [2024-07-25 10:26:48.430296] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:19:58.784 [2024-07-25 10:26:48.430310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:19:58.784 [2024-07-25 10:26:48.430335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:19:58.784 [2024-07-25 10:26:48.430365] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:19:58.784 [2024-07-25 10:26:48.430380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:19:58.784 [2024-07-25 10:26:48.430395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:19:58.784 [2024-07-25 10:26:48.430458] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.784 [2024-07-25 10:26:48.430490] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.784 [2024-07-25 10:26:48.430512] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.784 [2024-07-25 10:26:48.430536] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
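
The connect() failures with errno = 111 are the point of the test rather than a defect: the target process was killed, so every reconnect attempt to 10.0.0.2:4420 is refused. On Linux, errno 111 is ECONNREFUSED, which is easy to confirm from a shell:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused
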
00:19:58.784 [2024-07-25 10:26:48.430557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26dae00 (9): Bad file descriptor
00:19:58.784 [2024-07-25 10:26:48.430586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2025610 (9): Bad file descriptor
00:19:58.784 [2024-07-25 10:26:48.430611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254f4b0 (9): Bad file descriptor
00:19:58.784 [2024-07-25 10:26:48.430782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.784 [2024-07-25 10:26:48.430806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.784 [2024-07-25 10:26:48.430976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.784 [2024-07-25 10:26:48.431003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254f140 with addr=10.0.0.2, port=4420
00:19:58.784 [2024-07-25 10:26:48.431039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254f140 is same with the state(5) to be set
00:19:58.784 [2024-07-25 10:26:48.431201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.784 [2024-07-25 10:26:48.431227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26c36f0 with addr=10.0.0.2, port=4420
00:19:58.784 [2024-07-25 10:26:48.431245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c36f0 is same with the state(5) to be set
00:19:58.784 [2024-07-25 10:26:48.431391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.784 [2024-07-25 10:26:48.431417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26d9bb0 with addr=10.0.0.2, port=4420
00:19:58.784 [2024-07-25 10:26:48.431433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d9bb0 is same with the state(5) to be set
00:19:58.784 [2024-07-25 10:26:48.431455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:58.784 [2024-07-25 10:26:48.431469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:58.784 [2024-07-25 10:26:48.431490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:58.784 [2024-07-25 10:26:48.431513] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:19:58.784 [2024-07-25 10:26:48.431528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:19:58.784 [2024-07-25 10:26:48.431543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:19:58.784 [2024-07-25 10:26:48.431600] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.784 [2024-07-25 10:26:48.431626] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.784 [2024-07-25 10:26:48.431647] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.784 [2024-07-25 10:26:48.431668] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.784 [2024-07-25 10:26:48.431689] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.784 [2024-07-25 10:26:48.432704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.784 [2024-07-25 10:26:48.432733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.784 [2024-07-25 10:26:48.432769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254f140 (9): Bad file descriptor
00:19:58.784 [2024-07-25 10:26:48.432796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c36f0 (9): Bad file descriptor
00:19:58.784 [2024-07-25 10:26:48.432818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d9bb0 (9): Bad file descriptor
00:19:58.784 [2024-07-25 10:26:48.432835] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:19:58.784 [2024-07-25 10:26:48.432849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:19:58.784 [2024-07-25 10:26:48.432865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:19:58.784 [2024-07-25 10:26:48.432887] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:19:58.784 [2024-07-25 10:26:48.432903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:19:58.784 [2024-07-25 10:26:48.432919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:19:58.785 [2024-07-25 10:26:48.432938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:19:58.785 [2024-07-25 10:26:48.432959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:19:58.785 [2024-07-25 10:26:48.432974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:19:58.785 [2024-07-25 10:26:48.433376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:58.785 [2024-07-25 10:26:48.433411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:58.785 [2024-07-25 10:26:48.433430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.785 [2024-07-25 10:26:48.433445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.785 [2024-07-25 10:26:48.433458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.785 [2024-07-25 10:26:48.433507] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:58.785 [2024-07-25 10:26:48.433534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:58.785 [2024-07-25 10:26:48.433548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:19:58.785 [2024-07-25 10:26:48.433567] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:19:58.785 [2024-07-25 10:26:48.433588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:19:58.785 [2024-07-25 10:26:48.433602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:19:58.785 [2024-07-25 10:26:48.433619] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:19:58.785 [2024-07-25 10:26:48.433634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:19:58.785 [2024-07-25 10:26:48.433648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:19:58.785 [2024-07-25 10:26:48.433723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.785 [2024-07-25 10:26:48.433744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.785 [2024-07-25 10:26:48.433757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.785 [2024-07-25 10:26:48.433984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.785 [2024-07-25 10:26:48.434017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26eb690 with addr=10.0.0.2, port=4420
00:19:58.785 [2024-07-25 10:26:48.434036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26eb690 is same with the state(5) to be set
00:19:58.785 [2024-07-25 10:26:48.434157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.785 [2024-07-25 10:26:48.434184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26e4ea0 with addr=10.0.0.2, port=4420
00:19:58.785 [2024-07-25 10:26:48.434201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e4ea0 is same with the state(5) to be set
00:19:58.785 [2024-07-25 10:26:48.434249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26eb690 (9): Bad file descriptor
00:19:58.785 [2024-07-25 10:26:48.434275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e4ea0 (9): Bad file descriptor
00:19:58.785 [2024-07-25 10:26:48.434320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:19:58.785 [2024-07-25 10:26:48.434339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:19:58.785 [2024-07-25 10:26:48.434353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:19:58.785 [2024-07-25 10:26:48.434378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:19:58.785 [2024-07-25 10:26:48.434394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:19:58.785 [2024-07-25 10:26:48.434409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:19:58.785 [2024-07-25 10:26:48.434451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.785 [2024-07-25 10:26:48.434470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:59.044 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:59.044 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:59.981 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1537439 00:19:59.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1537439) - No such process 00:19:59.981 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:19:59.981 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:59.981 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:00.241 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:00.241 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:00.241 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:00.241 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.241 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.242 rmmod nvme_tcp 00:20:00.242 rmmod nvme_fabrics 00:20:00.242 rmmod nvme_keyring 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.242 10:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.242 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.147 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.147 00:20:02.147 real 0m7.436s 00:20:02.147 user 0m18.001s 00:20:02.147 sys 0m1.414s 00:20:02.147 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:02.147 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.147 ************************************ 00:20:02.147 END TEST nvmf_shutdown_tc3 00:20:02.147 ************************************ 00:20:02.147 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:02.147 00:20:02.147 real 0m27.043s 00:20:02.147 user 1m16.904s 00:20:02.147 sys 0m5.968s 00:20:02.147 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:02.147 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:02.147 ************************************ 00:20:02.147 END TEST nvmf_shutdown 00:20:02.147 ************************************ 00:20:02.147 10:26:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:20:02.147 00:20:02.147 real 10m45.458s 00:20:02.147 user 25m57.117s 00:20:02.147 sys 2m25.882s 00:20:02.147 10:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:02.147 10:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.147 ************************************ 00:20:02.147 END TEST nvmf_target_extra 00:20:02.147 ************************************ 00:20:02.407 10:26:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:02.407 10:26:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:02.407 10:26:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:02.407 10:26:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:02.407 ************************************ 00:20:02.407 START TEST nvmf_host 00:20:02.407 ************************************ 00:20:02.407 10:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:02.407 * Looking for test storage... 
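The nvmftestfini sequence traced above unloads the initiator kernel modules and removes the target network namespace before the next suite starts. A rough manual equivalent, using the interface and namespace names from this run (an approximation of common.sh's cleanup, not a verbatim copy):

    sync
    modprobe -v -r nvme-tcp            # the trace's rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1           # matches the final flush in the trace
    ip netns delete cvl_0_0_ns_spdk    # what _remove_spdk_ns boils down to here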
00:20:02.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.407 ************************************ 00:20:02.407 START TEST nvmf_multicontroller 00:20:02.407 ************************************ 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:02.407 * Looking for test storage... 
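Each suite re-sources nvmf/common.sh, which mints the host identity once per run with nvme gen-hostnqn (the NVME_HOSTNQN / NVME_HOSTID pair in the trace). Outside the harness, the same identity would be handed to nvme-cli roughly as below; the target address, port, and subsystem NQN are the ones used later in this run, and the UUID extraction mirrors what common.sh stores but is written here from scratch:

    HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}          # bare UUID, as common.sh keeps in NVME_HOSTID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"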
00:20:02.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.407 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.408 10:26:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.408 10:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.318 10:26:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.318 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:04.319 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:04.319 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:04.319 Found net devices under 0000:08:00.0: cvl_0_0 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:04.319 Found net devices under 0000:08:00.1: cvl_0_1 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:20:04.319 00:20:04.319 --- 10.0.0.2 ping statistics --- 00:20:04.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.319 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:20:04.319 00:20:04.319 --- 10.0.0.1 ping statistics --- 00:20:04.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.319 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1539435 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1539435 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1539435 ']' 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.319 10:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.320 [2024-07-25 10:26:53.946214] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
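The block above is nvmf_tcp_init at work on the two physical E810 ports found earlier: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target endpoint (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings verify each direction before nvmf_tgt starts inside the namespace. Condensed from the trace, with the same device names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns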
00:20:04.320 [2024-07-25 10:26:53.946311] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.320 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.320 [2024-07-25 10:26:54.011259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:04.579 [2024-07-25 10:26:54.127738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.579 [2024-07-25 10:26:54.127804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.579 [2024-07-25 10:26:54.127820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.579 [2024-07-25 10:26:54.127834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.579 [2024-07-25 10:26:54.127845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.579 [2024-07-25 10:26:54.127937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.579 [2024-07-25 10:26:54.127994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.579 [2024-07-25 10:26:54.127997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.579 [2024-07-25 10:26:54.269436] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.579 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.579 Malloc0 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.580 
10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.580 [2024-07-25 10:26:54.328381] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.580 [2024-07-25 10:26:54.336269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.580 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.839 Malloc1 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.839 10:26:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1539460 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1539460 /var/tmp/bdevperf.sock 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1539460 ']' 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
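Lines @27 through @43 of multicontroller.sh above provision the target over rpc_cmd (a thin wrapper around SPDK's scripts/rpc.py): one TCP transport, two 64 MB Malloc-backed subsystems, listeners on ports 4420 and 4421 of 10.0.0.2, then a bdevperf initiator started against its own RPC socket. Stripped of harness plumbing, the sequence is approximately (assumes rpc.py on PATH and the target on its default /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...repeated for Malloc1/cnode2, then the initiator-side workload process:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &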
00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.839 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.098 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.098 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:20:05.098 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:05.098 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.098 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.356 NVMe0n1 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.357 1 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.357 request: 00:20:05.357 { 00:20:05.357 "name": "NVMe0", 00:20:05.357 "trtype": "tcp", 00:20:05.357 "traddr": "10.0.0.2", 00:20:05.357 "adrfam": "ipv4", 00:20:05.357 
"trsvcid": "4420", 00:20:05.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.357 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:05.357 "hostaddr": "10.0.0.2", 00:20:05.357 "hostsvcid": "60000", 00:20:05.357 "prchk_reftag": false, 00:20:05.357 "prchk_guard": false, 00:20:05.357 "hdgst": false, 00:20:05.357 "ddgst": false, 00:20:05.357 "method": "bdev_nvme_attach_controller", 00:20:05.357 "req_id": 1 00:20:05.357 } 00:20:05.357 Got JSON-RPC error response 00:20:05.357 response: 00:20:05.357 { 00:20:05.357 "code": -114, 00:20:05.357 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:05.357 } 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.357 request: 00:20:05.357 { 00:20:05.357 "name": "NVMe0", 00:20:05.357 "trtype": "tcp", 00:20:05.357 "traddr": "10.0.0.2", 00:20:05.357 "adrfam": "ipv4", 00:20:05.357 "trsvcid": "4420", 00:20:05.357 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:05.357 "hostaddr": "10.0.0.2", 00:20:05.357 "hostsvcid": "60000", 00:20:05.357 "prchk_reftag": false, 00:20:05.357 "prchk_guard": false, 00:20:05.357 "hdgst": false, 00:20:05.357 "ddgst": false, 00:20:05.357 "method": "bdev_nvme_attach_controller", 00:20:05.357 "req_id": 1 00:20:05.357 } 00:20:05.357 Got JSON-RPC error response 00:20:05.357 response: 00:20:05.357 { 00:20:05.357 "code": -114, 00:20:05.357 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:20:05.357 } 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.357 request: 00:20:05.357 { 00:20:05.357 "name": "NVMe0", 00:20:05.357 "trtype": "tcp", 00:20:05.357 "traddr": "10.0.0.2", 00:20:05.357 "adrfam": "ipv4", 00:20:05.357 "trsvcid": "4420", 00:20:05.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.357 "hostaddr": "10.0.0.2", 00:20:05.357 "hostsvcid": "60000", 00:20:05.357 "prchk_reftag": false, 00:20:05.357 "prchk_guard": false, 00:20:05.357 "hdgst": false, 00:20:05.357 "ddgst": false, 00:20:05.357 "multipath": "disable", 00:20:05.357 "method": "bdev_nvme_attach_controller", 00:20:05.357 "req_id": 1 00:20:05.357 } 00:20:05.357 Got JSON-RPC error response 00:20:05.357 response: 00:20:05.357 { 00:20:05.357 "code": -114, 00:20:05.357 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:05.357 } 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:05.357 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.358 request: 00:20:05.358 { 00:20:05.358 "name": "NVMe0", 00:20:05.358 "trtype": "tcp", 00:20:05.358 "traddr": "10.0.0.2", 00:20:05.358 "adrfam": "ipv4", 00:20:05.358 "trsvcid": "4420", 00:20:05.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.358 "hostaddr": "10.0.0.2", 00:20:05.358 "hostsvcid": "60000", 00:20:05.358 "prchk_reftag": false, 00:20:05.358 "prchk_guard": false, 00:20:05.358 "hdgst": false, 00:20:05.358 "ddgst": false, 00:20:05.358 "multipath": "failover", 00:20:05.358 "method": "bdev_nvme_attach_controller", 00:20:05.358 "req_id": 1 00:20:05.358 } 00:20:05.358 Got JSON-RPC error response 00:20:05.358 response: 00:20:05.358 { 00:20:05.358 "code": -114, 00:20:05.358 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:05.358 } 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.358 10:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.358 00:20:05.358 10:26:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.358 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:05.358 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.358 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.358 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.358 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:05.358 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.358 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.618 00:20:05.618 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.618 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:05.618 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:05.618 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.618 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.618 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.618 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:05.618 10:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.998 0 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1539460 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1539460 ']' 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1539460 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1539460 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
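[Annotation] The step just completed above swaps the second path in (attach at 4421, detach, attach NVMe1) and then asserts via bdev_nvme_get_controllers | grep -c NVMe that exactly two controllers are registered before perform_tests starts I/O. A minimal standalone sketch of that verification, assuming an SPDK checkout at $SPDK_ROOT (hypothetical variable; the socket path, RPC method, and bdevperf.py helper below are the ones this run uses):

    #!/usr/bin/env bash
    set -euo pipefail
    SOCK=/var/tmp/bdevperf.sock
    RPC="$SPDK_ROOT/scripts/rpc.py -s $SOCK"   # $SPDK_ROOT is an assumed env var
    # Count registered NVMe controllers; after the path swap the test expects two.
    count=$($RPC bdev_nvme_get_controllers | grep -c NVMe || true)
    [ "$count" -eq 2 ] || { echo "expected 2 controllers, got $count" >&2; exit 1; }
    # Drive the pre-configured bdevperf workload over the same RPC socket.
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests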
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1539460'
00:20:06.998 killing process with pid 1539460
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1539460
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1539460
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat
00:20:06.998 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:20:06.998 [2024-07-25 10:26:54.440964] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:20:06.998 [2024-07-25 10:26:54.441067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539460 ]
00:20:06.998 EAL: No free 2048 kB hugepages reported on node 1
00:20:06.998 [2024-07-25 10:26:54.502148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:06.998 [2024-07-25 10:26:54.619149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:06.998 [2024-07-25 10:26:55.195301] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 3b370546-6bce-455b-b6b3-b5cfaa2eda84 already exists
00:20:06.998 [2024-07-25 10:26:55.195345] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:3b370546-6bce-455b-b6b3-b5cfaa2eda84 alias for bdev NVMe1n1
00:20:06.998 [2024-07-25 10:26:55.195362] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:20:06.998 Running I/O for 1 seconds...
00:20:06.998
00:20:06.998 Latency(us)
00:20:06.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:06.998 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:20:06.998 NVMe0n1 : 1.00 16694.03 65.21 0.00 0.00 7654.48 6602.15 17476.27
00:20:06.998 ===================================================================================================================
00:20:06.998 Total : 16694.03 65.21 0.00 0.00 7654.48 6602.15 17476.27
00:20:06.998 Received shutdown signal, test time was about 1.000000 seconds
00:20:06.998
00:20:06.998 Latency(us)
00:20:06.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:06.998 ===================================================================================================================
00:20:06.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:06.998 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:06.998 rmmod nvme_tcp
00:20:06.998 rmmod nvme_fabrics
00:20:06.998 rmmod nvme_keyring
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1539435 ']'
00:20:06.998 10:26:56
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1539435 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1539435 ']' 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1539435 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:20:06.998 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:06.999 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1539435 00:20:06.999 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:06.999 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:06.999 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1539435' 00:20:06.999 killing process with pid 1539435 00:20:06.999 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1539435 00:20:06.999 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1539435 00:20:07.259 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.259 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.259 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.259 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.259 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.259 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.259 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.259 10:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.800 00:20:09.800 real 0m6.959s 00:20:09.800 user 0m11.369s 00:20:09.800 sys 0m1.966s 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:09.800 ************************************ 00:20:09.800 END TEST nvmf_multicontroller 00:20:09.800 ************************************ 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.800 ************************************ 00:20:09.800 START TEST nvmf_aer 00:20:09.800 ************************************ 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:09.800 * Looking for test storage... 00:20:09.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.800 10:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:11.177 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:11.177 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.177 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:11.178 Found net devices under 0000:08:00.0: cvl_0_0 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.178 10:27:00 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:11.178 Found net devices under 0000:08:00.1: cvl_0_1 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:20:11.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:20:11.178 00:20:11.178 --- 10.0.0.2 ping statistics --- 00:20:11.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.178 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:20:11.178 00:20:11.178 --- 10.0.0.1 ping statistics --- 00:20:11.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.178 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1541205 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1541205 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1541205 ']' 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:11.178 10:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.178 [2024-07-25 10:27:00.898270] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
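[Annotation] The two pings above are the tail end of nvmf_tcp_init: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, the second port (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched under ip netns exec. A condensed sketch of that wiring, using only commands that appear in this trace (the cvl_0_* interface names are specific to this machine; run as root):

    # Move the target-side port into its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address the initiator end (root ns) and the target end (inside the ns).
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the default NVMe/TCP port, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1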
00:20:11.178 [2024-07-25 10:27:00.898366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.178 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.437 [2024-07-25 10:27:00.964524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.437 [2024-07-25 10:27:01.082080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.437 [2024-07-25 10:27:01.082139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.437 [2024-07-25 10:27:01.082155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.437 [2024-07-25 10:27:01.082170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.437 [2024-07-25 10:27:01.082182] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.437 [2024-07-25 10:27:01.082269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.437 [2024-07-25 10:27:01.082342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.437 [2024-07-25 10:27:01.082404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.437 [2024-07-25 10:27:01.082407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.437 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.437 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:20:11.437 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.437 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:11.437 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.697 [2024-07-25 10:27:01.231766] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.697 Malloc0 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.697 10:27:01 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.697 [2024-07-25 10:27:01.282274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.697 [ 00:20:11.697 { 00:20:11.697 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.697 "subtype": "Discovery", 00:20:11.697 "listen_addresses": [], 00:20:11.697 "allow_any_host": true, 00:20:11.697 "hosts": [] 00:20:11.697 }, 00:20:11.697 { 00:20:11.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.697 "subtype": "NVMe", 00:20:11.697 "listen_addresses": [ 00:20:11.697 { 00:20:11.697 "trtype": "TCP", 00:20:11.697 "adrfam": "IPv4", 00:20:11.697 "traddr": "10.0.0.2", 00:20:11.697 "trsvcid": "4420" 00:20:11.697 } 00:20:11.697 ], 00:20:11.697 "allow_any_host": true, 00:20:11.697 "hosts": [], 00:20:11.697 "serial_number": "SPDK00000000000001", 00:20:11.697 "model_number": "SPDK bdev Controller", 00:20:11.697 "max_namespaces": 2, 00:20:11.697 "min_cntlid": 1, 00:20:11.697 "max_cntlid": 65519, 00:20:11.697 "namespaces": [ 00:20:11.697 { 00:20:11.697 "nsid": 1, 00:20:11.697 "bdev_name": "Malloc0", 00:20:11.697 "name": "Malloc0", 00:20:11.697 "nguid": "0128CB3CEBF64802A662F9E8216DFCC2", 00:20:11.697 "uuid": "0128cb3c-ebf6-4802-a662-f9e8216dfcc2" 00:20:11.697 } 00:20:11.697 ] 00:20:11.697 } 00:20:11.697 ] 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1541344 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:11.697 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:11.697 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:11.958 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.958 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.958 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:11.958 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:11.958 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.958 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.958 Malloc1 00:20:11.958 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.958 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:11.958 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.959 [ 00:20:11.959 { 00:20:11.959 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.959 "subtype": "Discovery", 00:20:11.959 "listen_addresses": [], 00:20:11.959 "allow_any_host": true, 00:20:11.959 "hosts": [] 00:20:11.959 }, 00:20:11.959 { 00:20:11.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.959 "subtype": "NVMe", 00:20:11.959 "listen_addresses": [ 00:20:11.959 { 00:20:11.959 "trtype": "TCP", 00:20:11.959 "adrfam": "IPv4", 00:20:11.959 "traddr": "10.0.0.2", 00:20:11.959 "trsvcid": "4420" 00:20:11.959 } 00:20:11.959 ], 00:20:11.959 "allow_any_host": true, 00:20:11.959 "hosts": [], 00:20:11.959 "serial_number": "SPDK00000000000001", 00:20:11.959 "model_number": "SPDK bdev Controller", 00:20:11.959 "max_namespaces": 2, 00:20:11.959 "min_cntlid": 1, 00:20:11.959 "max_cntlid": 65519, 00:20:11.959 "namespaces": [ 00:20:11.959 { 00:20:11.959 "nsid": 1, 00:20:11.959 "bdev_name": "Malloc0", 00:20:11.959 "name": "Malloc0", 00:20:11.959 "nguid": "0128CB3CEBF64802A662F9E8216DFCC2", 00:20:11.959 "uuid": "0128cb3c-ebf6-4802-a662-f9e8216dfcc2" 00:20:11.959 }, 00:20:11.959 { 00:20:11.959 "nsid": 2, 00:20:11.959 "bdev_name": "Malloc1", 00:20:11.959 "name": "Malloc1", 00:20:11.959 "nguid": 
"8C5661B704AA4C37B1D98E0CF3285FE5", 00:20:11.959 "uuid": "8c5661b7-04aa-4c37-b1d9-8e0cf3285fe5" 00:20:11.959 } 00:20:11.959 ] 00:20:11.959 Asynchronous Event Request test 00:20:11.959 Attaching to 10.0.0.2 00:20:11.959 Attached to 10.0.0.2 00:20:11.959 Registering asynchronous event callbacks... 00:20:11.959 Starting namespace attribute notice tests for all controllers... 00:20:11.959 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:11.959 aer_cb - Changed Namespace 00:20:11.959 Cleaning up... 00:20:11.959 } 00:20:11.959 ] 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1541344 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.959 rmmod nvme_tcp 00:20:11.959 rmmod nvme_fabrics 00:20:11.959 rmmod nvme_keyring 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1541205 ']' 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1541205 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1541205 ']' 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1541205 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@955 -- # uname 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1541205 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1541205' 00:20:11.959 killing process with pid 1541205 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1541205 00:20:11.959 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1541205 00:20:12.220 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.220 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:12.220 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:12.220 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.220 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.220 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.220 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.220 10:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.760 10:27:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:14.760 00:20:14.760 real 0m4.907s 00:20:14.760 user 0m3.879s 00:20:14.760 sys 0m1.551s 00:20:14.760 10:27:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:14.760 10:27:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.760 ************************************ 00:20:14.760 END TEST nvmf_aer 00:20:14.760 ************************************ 00:20:14.760 10:27:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:14.760 10:27:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:14.760 10:27:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:14.760 10:27:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.760 ************************************ 00:20:14.760 START TEST nvmf_async_init 00:20:14.760 ************************************ 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:14.760 * Looking for test storage... 
00:20:14.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:14.760 10:27:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=035e03662609485695479c03f2530304 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.760 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.761 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.761 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.761 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.761 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.761 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.761 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.761 10:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:16.143 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:16.143 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
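The device-matching loop above works from a cached vendor:device table (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox ConnectX IDs) and, for a TCP run, only needs the kernel net device behind each matched PCI function. A minimal standalone sketch of that sysfs lookup, mirroring the pci_net_devs glob the trace expands next; the helper name and hard-coded address are illustrative, not part of nvmf/common.sh:

  # pci_to_netdevs: print the kernel net device(s) exposed by one PCI function,
  # using the same /sys/bus/pci/devices/<bdf>/net/* glob as the trace below.
  pci_to_netdevs() {
      local pci=$1 path
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $path ]] && echo "${path##*/}"    # keep only the interface name
      done
  }
  pci_to_netdevs 0000:08:00.0    # on this rig: cvl_0_0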
00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:16.143 Found net devices under 0000:08:00.0: cvl_0_0 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:16.143 Found net devices under 0000:08:00.1: cvl_0_1 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.143 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:16.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:20:16.144 00:20:16.144 --- 10.0.0.2 ping statistics --- 00:20:16.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.144 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:16.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:20:16.144 00:20:16.144 --- 10.0.0.1 ping statistics --- 00:20:16.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.144 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1542889 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1542889 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1542889 ']' 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.144 10:27:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.144 [2024-07-25 10:27:05.911134] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:20:16.144 [2024-07-25 10:27:05.911224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.403 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.403 [2024-07-25 10:27:05.975924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.403 [2024-07-25 10:27:06.091241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.403 [2024-07-25 10:27:06.091307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.403 [2024-07-25 10:27:06.091322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.403 [2024-07-25 10:27:06.091337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.403 [2024-07-25 10:27:06.091349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.403 [2024-07-25 10:27:06.091379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.663 [2024-07-25 10:27:06.221129] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.663 null0 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.663 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:16.664 10:27:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 035e03662609485695479c03f2530304 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.664 [2024-07-25 10:27:06.261369] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.664 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.924 nvme0n1 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.924 [ 00:20:16.924 { 00:20:16.924 "name": "nvme0n1", 00:20:16.924 "aliases": [ 00:20:16.924 "035e0366-2609-4856-9547-9c03f2530304" 00:20:16.924 ], 00:20:16.924 "product_name": "NVMe disk", 00:20:16.924 "block_size": 512, 00:20:16.924 "num_blocks": 2097152, 00:20:16.924 "uuid": "035e0366-2609-4856-9547-9c03f2530304", 00:20:16.924 "assigned_rate_limits": { 00:20:16.924 "rw_ios_per_sec": 0, 00:20:16.924 "rw_mbytes_per_sec": 0, 00:20:16.924 "r_mbytes_per_sec": 0, 00:20:16.924 "w_mbytes_per_sec": 0 00:20:16.924 }, 00:20:16.924 "claimed": false, 00:20:16.924 "zoned": false, 00:20:16.924 "supported_io_types": { 00:20:16.924 "read": true, 00:20:16.924 "write": true, 00:20:16.924 "unmap": false, 00:20:16.924 "flush": true, 00:20:16.924 "reset": true, 00:20:16.924 "nvme_admin": true, 00:20:16.924 "nvme_io": true, 00:20:16.924 "nvme_io_md": false, 00:20:16.924 "write_zeroes": true, 00:20:16.924 "zcopy": false, 00:20:16.924 "get_zone_info": false, 00:20:16.924 "zone_management": false, 00:20:16.924 "zone_append": false, 00:20:16.924 "compare": true, 00:20:16.924 "compare_and_write": true, 00:20:16.924 "abort": true, 00:20:16.924 "seek_hole": false, 00:20:16.924 "seek_data": false, 00:20:16.924 "copy": true, 00:20:16.924 "nvme_iov_md": 
false 00:20:16.924 }, 00:20:16.924 "memory_domains": [ 00:20:16.924 { 00:20:16.924 "dma_device_id": "system", 00:20:16.924 "dma_device_type": 1 00:20:16.924 } 00:20:16.924 ], 00:20:16.924 "driver_specific": { 00:20:16.924 "nvme": [ 00:20:16.924 { 00:20:16.924 "trid": { 00:20:16.924 "trtype": "TCP", 00:20:16.924 "adrfam": "IPv4", 00:20:16.924 "traddr": "10.0.0.2", 00:20:16.924 "trsvcid": "4420", 00:20:16.924 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:16.924 }, 00:20:16.924 "ctrlr_data": { 00:20:16.924 "cntlid": 1, 00:20:16.924 "vendor_id": "0x8086", 00:20:16.924 "model_number": "SPDK bdev Controller", 00:20:16.924 "serial_number": "00000000000000000000", 00:20:16.924 "firmware_revision": "24.09", 00:20:16.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:16.924 "oacs": { 00:20:16.924 "security": 0, 00:20:16.924 "format": 0, 00:20:16.924 "firmware": 0, 00:20:16.924 "ns_manage": 0 00:20:16.924 }, 00:20:16.924 "multi_ctrlr": true, 00:20:16.924 "ana_reporting": false 00:20:16.924 }, 00:20:16.924 "vs": { 00:20:16.924 "nvme_version": "1.3" 00:20:16.924 }, 00:20:16.924 "ns_data": { 00:20:16.924 "id": 1, 00:20:16.924 "can_share": true 00:20:16.924 } 00:20:16.924 } 00:20:16.924 ], 00:20:16.924 "mp_policy": "active_passive" 00:20:16.924 } 00:20:16.924 } 00:20:16.924 ] 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.924 [2024-07-25 10:27:06.514570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:16.924 [2024-07-25 10:27:06.514656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239f3d0 (9): Bad file descriptor 00:20:16.924 [2024-07-25 10:27:06.656641] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
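Stripped of the xtrace noise, what async_init.sh just exercised is a short RPC sequence: create a 1024 MiB null bdev on the target, expose it as a namespace carrying the NGUID generated earlier, listen on 10.0.0.2:4420, attach an initiator-side controller, and reset it (the cntlid in the two bdev_get_bdevs dumps moves from 1 to 2). A condensed replay via rpc.py, assuming a target already up on the default /var/tmp/spdk.sock; methods and flags are taken from the trace:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp -o      # flags as in NVMF_TRANSPORT_OPTS
  ./scripts/rpc.py bdev_null_create null0 1024 512      # 1024 MiB, 512 B blocks -> 2097152 blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
      -g 035e03662609485695479c03f2530304               # NGUID = uuidgen output, dashes stripped
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0     # host side: surfaces nvme0n1
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0     # reconnect; cntlid 1 -> 2
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1            # emits the JSON blocks seen in the log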
00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.924 [ 00:20:16.924 { 00:20:16.924 "name": "nvme0n1", 00:20:16.924 "aliases": [ 00:20:16.924 "035e0366-2609-4856-9547-9c03f2530304" 00:20:16.924 ], 00:20:16.924 "product_name": "NVMe disk", 00:20:16.924 "block_size": 512, 00:20:16.924 "num_blocks": 2097152, 00:20:16.924 "uuid": "035e0366-2609-4856-9547-9c03f2530304", 00:20:16.924 "assigned_rate_limits": { 00:20:16.924 "rw_ios_per_sec": 0, 00:20:16.924 "rw_mbytes_per_sec": 0, 00:20:16.924 "r_mbytes_per_sec": 0, 00:20:16.924 "w_mbytes_per_sec": 0 00:20:16.924 }, 00:20:16.924 "claimed": false, 00:20:16.924 "zoned": false, 00:20:16.924 "supported_io_types": { 00:20:16.924 "read": true, 00:20:16.924 "write": true, 00:20:16.924 "unmap": false, 00:20:16.924 "flush": true, 00:20:16.924 "reset": true, 00:20:16.924 "nvme_admin": true, 00:20:16.924 "nvme_io": true, 00:20:16.924 "nvme_io_md": false, 00:20:16.924 "write_zeroes": true, 00:20:16.924 "zcopy": false, 00:20:16.924 "get_zone_info": false, 00:20:16.924 "zone_management": false, 00:20:16.924 "zone_append": false, 00:20:16.924 "compare": true, 00:20:16.924 "compare_and_write": true, 00:20:16.924 "abort": true, 00:20:16.924 "seek_hole": false, 00:20:16.924 "seek_data": false, 00:20:16.924 "copy": true, 00:20:16.924 "nvme_iov_md": false 00:20:16.924 }, 00:20:16.924 "memory_domains": [ 00:20:16.924 { 00:20:16.924 "dma_device_id": "system", 00:20:16.924 "dma_device_type": 1 00:20:16.924 } 00:20:16.924 ], 00:20:16.924 "driver_specific": { 00:20:16.924 "nvme": [ 00:20:16.924 { 00:20:16.924 "trid": { 00:20:16.924 "trtype": "TCP", 00:20:16.924 "adrfam": "IPv4", 00:20:16.924 "traddr": "10.0.0.2", 00:20:16.924 "trsvcid": "4420", 00:20:16.924 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:16.924 }, 00:20:16.924 "ctrlr_data": { 00:20:16.924 "cntlid": 2, 00:20:16.924 "vendor_id": "0x8086", 00:20:16.924 "model_number": "SPDK bdev Controller", 00:20:16.924 "serial_number": "00000000000000000000", 00:20:16.924 "firmware_revision": "24.09", 00:20:16.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:16.924 "oacs": { 00:20:16.924 "security": 0, 00:20:16.924 "format": 0, 00:20:16.924 "firmware": 0, 00:20:16.924 "ns_manage": 0 00:20:16.924 }, 00:20:16.924 "multi_ctrlr": true, 00:20:16.924 "ana_reporting": false 00:20:16.924 }, 00:20:16.924 "vs": { 00:20:16.924 "nvme_version": "1.3" 00:20:16.924 }, 00:20:16.924 "ns_data": { 00:20:16.924 "id": 1, 00:20:16.924 "can_share": true 00:20:16.924 } 00:20:16.924 } 00:20:16.924 ], 00:20:16.924 "mp_policy": "active_passive" 00:20:16.924 } 00:20:16.924 } 00:20:16.924 ] 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.924 10:27:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7D0ujogy1J 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:16.924 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7D0ujogy1J 00:20:16.925 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:16.925 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.925 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.185 [2024-07-25 10:27:06.711297] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.185 [2024-07-25 10:27:06.711430] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7D0ujogy1J 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.185 [2024-07-25 10:27:06.719307] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7D0ujogy1J 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.185 [2024-07-25 10:27:06.727347] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.185 [2024-07-25 10:27:06.727405] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:17.185 nvme0n1 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
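The TLS leg here is easy to lose in the trace: write a PSK interchange key to a temp file, set it to mode 0600 as the test does, restrict the subsystem to a single allowed host, open a second listener on port 4421 with --secure-channel, and reconnect through it with the same key (the bdev_get_bdevs dump that follows shows trsvcid 4421 and cntlid 3). A condensed sketch with key material and NQNs copied from the log; note the deprecation warnings above, which say this PSK-path form is scheduled for removal in v24.09:

  key=$(mktemp)    # this run got /tmp/tmp.7D0ujogy1J
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
  chmod 0600 "$key"                                    # test tightens perms before use
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk "$key"
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"
  rm -f "$key"                                         # the test removes the key on teardown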
00:20:17.185 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.185 [ 00:20:17.185 { 00:20:17.185 "name": "nvme0n1", 00:20:17.185 "aliases": [ 00:20:17.185 "035e0366-2609-4856-9547-9c03f2530304" 00:20:17.185 ], 00:20:17.185 "product_name": "NVMe disk", 00:20:17.185 "block_size": 512, 00:20:17.185 "num_blocks": 2097152, 00:20:17.185 "uuid": "035e0366-2609-4856-9547-9c03f2530304", 00:20:17.185 "assigned_rate_limits": { 00:20:17.185 "rw_ios_per_sec": 0, 00:20:17.185 "rw_mbytes_per_sec": 0, 00:20:17.185 "r_mbytes_per_sec": 0, 00:20:17.185 "w_mbytes_per_sec": 0 00:20:17.185 }, 00:20:17.185 "claimed": false, 00:20:17.185 "zoned": false, 00:20:17.185 "supported_io_types": { 00:20:17.185 "read": true, 00:20:17.185 "write": true, 00:20:17.185 "unmap": false, 00:20:17.185 "flush": true, 00:20:17.185 "reset": true, 00:20:17.185 "nvme_admin": true, 00:20:17.185 "nvme_io": true, 00:20:17.185 "nvme_io_md": false, 00:20:17.185 "write_zeroes": true, 00:20:17.185 "zcopy": false, 00:20:17.185 "get_zone_info": false, 00:20:17.185 "zone_management": false, 00:20:17.185 "zone_append": false, 00:20:17.185 "compare": true, 00:20:17.185 "compare_and_write": true, 00:20:17.185 "abort": true, 00:20:17.185 "seek_hole": false, 00:20:17.185 "seek_data": false, 00:20:17.185 "copy": true, 00:20:17.185 "nvme_iov_md": false 00:20:17.185 }, 00:20:17.185 "memory_domains": [ 00:20:17.185 { 00:20:17.185 "dma_device_id": "system", 00:20:17.185 "dma_device_type": 1 00:20:17.185 } 00:20:17.185 ], 00:20:17.185 "driver_specific": { 00:20:17.185 "nvme": [ 00:20:17.185 { 00:20:17.185 "trid": { 00:20:17.185 "trtype": "TCP", 00:20:17.185 "adrfam": "IPv4", 00:20:17.185 "traddr": "10.0.0.2", 00:20:17.185 "trsvcid": "4421", 00:20:17.185 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:17.185 }, 00:20:17.185 "ctrlr_data": { 00:20:17.185 "cntlid": 3, 00:20:17.185 "vendor_id": "0x8086", 00:20:17.185 "model_number": "SPDK bdev Controller", 00:20:17.185 "serial_number": "00000000000000000000", 00:20:17.185 "firmware_revision": "24.09", 00:20:17.185 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:17.185 "oacs": { 00:20:17.185 "security": 0, 00:20:17.186 "format": 0, 00:20:17.186 "firmware": 0, 00:20:17.186 "ns_manage": 0 00:20:17.186 }, 00:20:17.186 "multi_ctrlr": true, 00:20:17.186 "ana_reporting": false 00:20:17.186 }, 00:20:17.186 "vs": { 00:20:17.186 "nvme_version": "1.3" 00:20:17.186 }, 00:20:17.186 "ns_data": { 00:20:17.186 "id": 1, 00:20:17.186 "can_share": true 00:20:17.186 } 00:20:17.186 } 00:20:17.186 ], 00:20:17.186 "mp_policy": "active_passive" 00:20:17.186 } 00:20:17.186 } 00:20:17.186 ] 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.7D0ujogy1J 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:17.186 10:27:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.186 rmmod nvme_tcp 00:20:17.186 rmmod nvme_fabrics 00:20:17.186 rmmod nvme_keyring 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1542889 ']' 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1542889 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1542889 ']' 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1542889 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1542889 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1542889' 00:20:17.186 killing process with pid 1542889 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1542889 00:20:17.186 [2024-07-25 10:27:06.915095] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:17.186 [2024-07-25 10:27:06.915130] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:17.186 10:27:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1542889 00:20:17.444 10:27:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.445 10:27:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.445 10:27:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.445 10:27:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.445 10:27:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.445 10:27:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.445 10:27:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.445 10:27:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.980 00:20:19.980 real 0m5.145s 00:20:19.980 user 0m1.981s 00:20:19.980 sys 0m1.580s 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:19.980 ************************************ 00:20:19.980 END TEST nvmf_async_init 00:20:19.980 ************************************ 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.980 ************************************ 00:20:19.980 START TEST dma 00:20:19.980 ************************************ 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:19.980 * Looking for test storage... 00:20:19.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.980 
10:27:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.980 10:27:09 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:19.980 00:20:19.980 real 0m0.069s 00:20:19.980 user 0m0.027s 00:20:19.980 sys 0m0.047s 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:19.980 ************************************ 00:20:19.980 END TEST dma 00:20:19.980 ************************************ 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.980 ************************************ 00:20:19.980 START TEST nvmf_identify 00:20:19.980 ************************************ 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:19.980 * Looking for test storage... 00:20:19.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.980 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.981 10:27:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.358 10:27:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:21.358 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.358 10:27:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.358 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:21.359 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:21.359 Found net devices under 0000:08:00.0: cvl_0_0 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:21.359 Found net devices under 0000:08:00.1: cvl_0_1 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.359 10:27:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:20:21.359 00:20:21.359 --- 10.0.0.2 ping statistics --- 00:20:21.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.359 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:21.359 00:20:21.359 --- 10.0.0.1 ping statistics --- 00:20:21.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.359 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1544980 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1544980 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1544980 ']' 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.359 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.359 [2024-07-25 10:27:11.127983] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
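Up to this point the trace shows nvmftestinit building the test topology: both E810 ports (0x8086:0x159b, driver ice) are discovered, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the host namespace as the initiator side (10.0.0.1), connectivity is verified with ping in both directions, and nvmf_tgt is started inside the namespace. Below is a condensed sketch of the equivalent manual setup and of the RPC provisioning the test performs next; interface names, sizes, and NQNs are taken from this run, while the relative paths assume the SPDK repository root and the default /var/tmp/spdk.sock RPC socket.

  # Isolate the target port in its own network namespace; the peer port stays on the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Start the target inside the namespace, then provision it over the RPC socket,
  # mirroring the rpc_cmd calls that host/identify.sh issues in the trace below.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420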
00:20:21.359 [2024-07-25 10:27:11.128072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.620 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.620 [2024-07-25 10:27:11.195628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.620 [2024-07-25 10:27:11.314623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.620 [2024-07-25 10:27:11.314686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.620 [2024-07-25 10:27:11.314702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.620 [2024-07-25 10:27:11.314716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.620 [2024-07-25 10:27:11.314728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.620 [2024-07-25 10:27:11.314830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.620 [2024-07-25 10:27:11.314951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.620 [2024-07-25 10:27:11.314999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.620 [2024-07-25 10:27:11.315003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 [2024-07-25 10:27:11.436692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 Malloc0 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 [2024-07-25 10:27:11.515093] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 [ 00:20:21.882 { 00:20:21.882 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:21.882 "subtype": "Discovery", 00:20:21.882 "listen_addresses": [ 00:20:21.882 { 00:20:21.882 "trtype": "TCP", 00:20:21.882 "adrfam": "IPv4", 00:20:21.882 "traddr": "10.0.0.2", 00:20:21.882 "trsvcid": "4420" 00:20:21.882 } 00:20:21.882 ], 00:20:21.882 "allow_any_host": true, 00:20:21.882 "hosts": [] 00:20:21.882 }, 00:20:21.882 { 00:20:21.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.882 "subtype": "NVMe", 00:20:21.882 "listen_addresses": [ 00:20:21.882 { 00:20:21.882 "trtype": "TCP", 00:20:21.882 "adrfam": "IPv4", 00:20:21.882 "traddr": "10.0.0.2", 00:20:21.882 "trsvcid": "4420" 00:20:21.882 } 00:20:21.882 ], 00:20:21.882 "allow_any_host": true, 00:20:21.882 "hosts": [], 00:20:21.882 "serial_number": "SPDK00000000000001", 00:20:21.882 "model_number": "SPDK bdev Controller", 00:20:21.882 "max_namespaces": 32, 00:20:21.882 "min_cntlid": 1, 00:20:21.882 "max_cntlid": 65519, 00:20:21.882 "namespaces": [ 00:20:21.882 { 00:20:21.882 "nsid": 1, 00:20:21.882 "bdev_name": "Malloc0", 00:20:21.882 "name": "Malloc0", 00:20:21.882 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:21.882 "eui64": "ABCDEF0123456789", 00:20:21.882 "uuid": "66fedb02-6698-41ab-919b-fa8e74c1f511" 00:20:21.882 } 00:20:21.882 ] 00:20:21.882 } 00:20:21.882 ] 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.882 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:21.882 [2024-07-25 10:27:11.558326] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:20:21.882 [2024-07-25 10:27:11.558386] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545090 ] 00:20:21.882 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.883 [2024-07-25 10:27:11.602487] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:21.883 [2024-07-25 10:27:11.602558] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:21.883 [2024-07-25 10:27:11.602570] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:21.883 [2024-07-25 10:27:11.602587] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:21.883 [2024-07-25 10:27:11.602602] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:21.883 [2024-07-25 10:27:11.602895] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:21.883 [2024-07-25 10:27:11.602950] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8e8400 0 00:20:21.883 [2024-07-25 10:27:11.609496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:21.883 [2024-07-25 10:27:11.609523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:21.883 [2024-07-25 10:27:11.609534] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:21.883 [2024-07-25 10:27:11.609541] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:21.883 [2024-07-25 10:27:11.609595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.609608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.609617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.883 [2024-07-25 10:27:11.609636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:21.883 [2024-07-25 10:27:11.609665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.883 [2024-07-25 10:27:11.617506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.883 [2024-07-25 10:27:11.617524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.883 [2024-07-25 10:27:11.617532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.617540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:21.883 [2024-07-25 10:27:11.617556] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:21.883 [2024-07-25 10:27:11.617568] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:21.883 [2024-07-25 10:27:11.617579] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:20:21.883 [2024-07-25 10:27:11.617605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.617614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.617622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.883 [2024-07-25 10:27:11.617635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.883 [2024-07-25 10:27:11.617660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.883 [2024-07-25 10:27:11.617819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.883 [2024-07-25 10:27:11.617835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.883 [2024-07-25 10:27:11.617842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.617850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:21.883 [2024-07-25 10:27:11.617865] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:21.883 [2024-07-25 10:27:11.617887] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:21.883 [2024-07-25 10:27:11.617901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.617909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.617917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.883 [2024-07-25 10:27:11.617929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.883 [2024-07-25 10:27:11.617952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.883 [2024-07-25 10:27:11.618062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.883 [2024-07-25 10:27:11.618078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.883 [2024-07-25 10:27:11.618085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.618093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:21.883 [2024-07-25 10:27:11.618103] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:21.883 [2024-07-25 10:27:11.618118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:21.883 [2024-07-25 10:27:11.618132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.618140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.618148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.883 [2024-07-25 10:27:11.618159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.883 [2024-07-25 10:27:11.618182] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.883 [2024-07-25 10:27:11.618332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.883 [2024-07-25 10:27:11.618348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.883 [2024-07-25 10:27:11.618355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.618363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:21.883 [2024-07-25 10:27:11.618373] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:21.883 [2024-07-25 10:27:11.618391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.618401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.618408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.883 [2024-07-25 10:27:11.618420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.883 [2024-07-25 10:27:11.618443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.883 [2024-07-25 10:27:11.618595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.883 [2024-07-25 10:27:11.618611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.883 [2024-07-25 10:27:11.618619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.618626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:21.883 [2024-07-25 10:27:11.618636] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:21.883 [2024-07-25 10:27:11.618646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:21.883 [2024-07-25 10:27:11.618666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:21.883 [2024-07-25 10:27:11.618778] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:21.883 [2024-07-25 10:27:11.618787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:21.883 [2024-07-25 10:27:11.618802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.618811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.618818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.883 [2024-07-25 10:27:11.618830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.883 [2024-07-25 10:27:11.618853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.883 [2024-07-25 10:27:11.619004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.883 
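The DEBUG lines above walk through the host-side controller initialization state machine for the discovery controller: FABRIC CONNECT on the admin queue, property reads of VS and CAP, a check of CC.EN followed by a wait for CSTS.RDY = 0; setting CC.EN = 1, waiting for CSTS.RDY = 1, and IDENTIFY follow in the trace below. Each register access travels as a FABRIC PROPERTY GET/SET capsule over the TCP connection. For reference, this trace was produced by the invocation below, reassembled here from the wrapped log; -L all enables every debug log flag and requires a debug build of SPDK.

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all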
[2024-07-25 10:27:11.619019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.883 [2024-07-25 10:27:11.619027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.619034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:21.883 [2024-07-25 10:27:11.619043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:21.883 [2024-07-25 10:27:11.619061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.619071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.619078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.883 [2024-07-25 10:27:11.619090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.883 [2024-07-25 10:27:11.619112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.883 [2024-07-25 10:27:11.619217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.883 [2024-07-25 10:27:11.619232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.883 [2024-07-25 10:27:11.619240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.619247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:21.883 [2024-07-25 10:27:11.619256] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:21.883 [2024-07-25 10:27:11.619265] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:21.883 [2024-07-25 10:27:11.619280] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:21.883 [2024-07-25 10:27:11.619295] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:21.883 [2024-07-25 10:27:11.619311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.883 [2024-07-25 10:27:11.619320] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.883 [2024-07-25 10:27:11.619332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.883 [2024-07-25 10:27:11.619354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.883 [2024-07-25 10:27:11.619521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:21.884 [2024-07-25 10:27:11.619537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:21.884 [2024-07-25 10:27:11.619549] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.619557] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e8400): datao=0, datal=4096, cccid=0 00:20:21.884 [2024-07-25 10:27:11.619567] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x9483c0) on tqpair(0x8e8400): expected_datao=0, payload_size=4096 00:20:21.884 [2024-07-25 10:27:11.619576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.619588] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.619597] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.619641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.884 [2024-07-25 10:27:11.619656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.884 [2024-07-25 10:27:11.619663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.619671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:21.884 [2024-07-25 10:27:11.619684] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:21.884 [2024-07-25 10:27:11.619694] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:21.884 [2024-07-25 10:27:11.619703] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:21.884 [2024-07-25 10:27:11.619712] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:21.884 [2024-07-25 10:27:11.619721] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:21.884 [2024-07-25 10:27:11.619730] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:21.884 [2024-07-25 10:27:11.619746] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:21.884 [2024-07-25 10:27:11.619764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.619774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.619781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.884 [2024-07-25 10:27:11.619794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:21.884 [2024-07-25 10:27:11.619816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.884 [2024-07-25 10:27:11.619979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.884 [2024-07-25 10:27:11.619992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.884 [2024-07-25 10:27:11.619999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:21.884 [2024-07-25 10:27:11.620020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e8400) 00:20:21.884 [2024-07-25 10:27:11.620047] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.884 [2024-07-25 10:27:11.620058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8e8400) 00:20:21.884 [2024-07-25 10:27:11.620083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.884 [2024-07-25 10:27:11.620099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8e8400) 00:20:21.884 [2024-07-25 10:27:11.620125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.884 [2024-07-25 10:27:11.620136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e8400) 00:20:21.884 [2024-07-25 10:27:11.620160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.884 [2024-07-25 10:27:11.620170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:21.884 [2024-07-25 10:27:11.620190] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:21.884 [2024-07-25 10:27:11.620204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e8400) 00:20:21.884 [2024-07-25 10:27:11.620224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.884 [2024-07-25 10:27:11.620248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 0, qid 0 00:20:21.884 [2024-07-25 10:27:11.620260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948540, cid 1, qid 0 00:20:21.884 [2024-07-25 10:27:11.620269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9486c0, cid 2, qid 0 00:20:21.884 [2024-07-25 10:27:11.620277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948840, cid 3, qid 0 00:20:21.884 [2024-07-25 10:27:11.620286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9489c0, cid 4, qid 0 00:20:21.884 [2024-07-25 10:27:11.620493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:21.884 [2024-07-25 10:27:11.620509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:21.884 [2024-07-25 10:27:11.620516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620524] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9489c0) on tqpair=0x8e8400 00:20:21.884 [2024-07-25 10:27:11.620534] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:21.884 [2024-07-25 10:27:11.620544] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:21.884 [2024-07-25 10:27:11.620563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e8400) 00:20:21.884 [2024-07-25 10:27:11.620586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.884 [2024-07-25 10:27:11.620608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9489c0, cid 4, qid 0 00:20:21.884 [2024-07-25 10:27:11.620743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:21.884 [2024-07-25 10:27:11.620759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:21.884 [2024-07-25 10:27:11.620767] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620774] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e8400): datao=0, datal=4096, cccid=4 00:20:21.884 [2024-07-25 10:27:11.620787] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9489c0) on tqpair(0x8e8400): expected_datao=0, payload_size=4096 00:20:21.884 [2024-07-25 10:27:11.620796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620819] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:21.884 [2024-07-25 10:27:11.620828] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.149 [2024-07-25 10:27:11.665493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.149 [2024-07-25 10:27:11.665515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.149 [2024-07-25 10:27:11.665524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.149 [2024-07-25 10:27:11.665532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9489c0) on tqpair=0x8e8400 00:20:22.149 [2024-07-25 10:27:11.665554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:22.149 [2024-07-25 10:27:11.665597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.149 [2024-07-25 10:27:11.665609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e8400) 00:20:22.149 [2024-07-25 10:27:11.665622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.149 [2024-07-25 10:27:11.665637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.149 [2024-07-25 10:27:11.665645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.149 [2024-07-25 10:27:11.665652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e8400) 00:20:22.149 [2024-07-25 10:27:11.665663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:22.149 [2024-07-25 10:27:11.665694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9489c0, cid 4, qid 0 00:20:22.149 [2024-07-25 10:27:11.665707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948b40, cid 5, qid 0 00:20:22.149 [2024-07-25 10:27:11.665909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.149 [2024-07-25 10:27:11.665925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.149 [2024-07-25 10:27:11.665933] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.149 [2024-07-25 10:27:11.665940] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e8400): datao=0, datal=1024, cccid=4 00:20:22.150 [2024-07-25 10:27:11.665949] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9489c0) on tqpair(0x8e8400): expected_datao=0, payload_size=1024 00:20:22.150 [2024-07-25 10:27:11.665957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.665969] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.665977] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.665987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.150 [2024-07-25 10:27:11.665997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.150 [2024-07-25 10:27:11.666004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.666012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948b40) on tqpair=0x8e8400 00:20:22.150 [2024-07-25 10:27:11.706620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.150 [2024-07-25 10:27:11.706640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.150 [2024-07-25 10:27:11.706648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.706656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9489c0) on tqpair=0x8e8400 00:20:22.150 [2024-07-25 10:27:11.706676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.706686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e8400) 00:20:22.150 [2024-07-25 10:27:11.706699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.150 [2024-07-25 10:27:11.706735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9489c0, cid 4, qid 0 00:20:22.150 [2024-07-25 10:27:11.706913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.150 [2024-07-25 10:27:11.706929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.150 [2024-07-25 10:27:11.706937] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.706944] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e8400): datao=0, datal=3072, cccid=4 00:20:22.150 [2024-07-25 10:27:11.706953] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9489c0) on tqpair(0x8e8400): expected_datao=0, payload_size=3072 00:20:22.150 [2024-07-25 10:27:11.706962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.706973] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.706981] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.707025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.150 [2024-07-25 10:27:11.707039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.150 [2024-07-25 10:27:11.707047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.707054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9489c0) on tqpair=0x8e8400 00:20:22.150 [2024-07-25 10:27:11.707073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.707082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e8400) 00:20:22.150 [2024-07-25 10:27:11.707094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.150 [2024-07-25 10:27:11.707124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9489c0, cid 4, qid 0 00:20:22.150 [2024-07-25 10:27:11.707307] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.150 [2024-07-25 10:27:11.707320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.150 [2024-07-25 10:27:11.707327] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.707334] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e8400): datao=0, datal=8, cccid=4 00:20:22.150 [2024-07-25 10:27:11.707343] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9489c0) on tqpair(0x8e8400): expected_datao=0, payload_size=8 00:20:22.150 [2024-07-25 10:27:11.707351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.707362] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.707370] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.747615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.150 [2024-07-25 10:27:11.747635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.150 [2024-07-25 10:27:11.747643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.150 [2024-07-25 10:27:11.747651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9489c0) on tqpair=0x8e8400 00:20:22.150 ===================================================== 00:20:22.150 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:22.150 ===================================================== 00:20:22.150 Controller Capabilities/Features 00:20:22.150 ================================ 00:20:22.150 Vendor ID: 0000 00:20:22.150 Subsystem Vendor ID: 0000 00:20:22.150 Serial Number: .................... 00:20:22.150 Model Number: ........................................ 
00:20:22.150 Firmware Version: 24.09 00:20:22.150 Recommended Arb Burst: 0 00:20:22.150 IEEE OUI Identifier: 00 00 00 00:20:22.150 Multi-path I/O 00:20:22.150 May have multiple subsystem ports: No 00:20:22.150 May have multiple controllers: No 00:20:22.150 Associated with SR-IOV VF: No 00:20:22.150 Max Data Transfer Size: 131072 00:20:22.150 Max Number of Namespaces: 0 00:20:22.150 Max Number of I/O Queues: 1024 00:20:22.150 NVMe Specification Version (VS): 1.3 00:20:22.150 NVMe Specification Version (Identify): 1.3 00:20:22.150 Maximum Queue Entries: 128 00:20:22.150 Contiguous Queues Required: Yes 00:20:22.150 Arbitration Mechanisms Supported 00:20:22.150 Weighted Round Robin: Not Supported 00:20:22.150 Vendor Specific: Not Supported 00:20:22.150 Reset Timeout: 15000 ms 00:20:22.150 Doorbell Stride: 4 bytes 00:20:22.150 NVM Subsystem Reset: Not Supported 00:20:22.150 Command Sets Supported 00:20:22.150 NVM Command Set: Supported 00:20:22.150 Boot Partition: Not Supported 00:20:22.150 Memory Page Size Minimum: 4096 bytes 00:20:22.150 Memory Page Size Maximum: 4096 bytes 00:20:22.150 Persistent Memory Region: Not Supported 00:20:22.150 Optional Asynchronous Events Supported 00:20:22.150 Namespace Attribute Notices: Not Supported 00:20:22.150 Firmware Activation Notices: Not Supported 00:20:22.150 ANA Change Notices: Not Supported 00:20:22.150 PLE Aggregate Log Change Notices: Not Supported 00:20:22.150 LBA Status Info Alert Notices: Not Supported 00:20:22.150 EGE Aggregate Log Change Notices: Not Supported 00:20:22.150 Normal NVM Subsystem Shutdown event: Not Supported 00:20:22.150 Zone Descriptor Change Notices: Not Supported 00:20:22.150 Discovery Log Change Notices: Supported 00:20:22.150 Controller Attributes 00:20:22.150 128-bit Host Identifier: Not Supported 00:20:22.150 Non-Operational Permissive Mode: Not Supported 00:20:22.150 NVM Sets: Not Supported 00:20:22.150 Read Recovery Levels: Not Supported 00:20:22.150 Endurance Groups: Not Supported 00:20:22.150 Predictable Latency Mode: Not Supported 00:20:22.150 Traffic Based Keep ALive: Not Supported 00:20:22.150 Namespace Granularity: Not Supported 00:20:22.150 SQ Associations: Not Supported 00:20:22.150 UUID List: Not Supported 00:20:22.150 Multi-Domain Subsystem: Not Supported 00:20:22.150 Fixed Capacity Management: Not Supported 00:20:22.150 Variable Capacity Management: Not Supported 00:20:22.150 Delete Endurance Group: Not Supported 00:20:22.150 Delete NVM Set: Not Supported 00:20:22.150 Extended LBA Formats Supported: Not Supported 00:20:22.150 Flexible Data Placement Supported: Not Supported 00:20:22.150 00:20:22.150 Controller Memory Buffer Support 00:20:22.150 ================================ 00:20:22.150 Supported: No 00:20:22.150 00:20:22.150 Persistent Memory Region Support 00:20:22.150 ================================ 00:20:22.150 Supported: No 00:20:22.150 00:20:22.150 Admin Command Set Attributes 00:20:22.150 ============================ 00:20:22.150 Security Send/Receive: Not Supported 00:20:22.150 Format NVM: Not Supported 00:20:22.150 Firmware Activate/Download: Not Supported 00:20:22.150 Namespace Management: Not Supported 00:20:22.150 Device Self-Test: Not Supported 00:20:22.150 Directives: Not Supported 00:20:22.150 NVMe-MI: Not Supported 00:20:22.150 Virtualization Management: Not Supported 00:20:22.150 Doorbell Buffer Config: Not Supported 00:20:22.150 Get LBA Status Capability: Not Supported 00:20:22.150 Command & Feature Lockdown Capability: Not Supported 00:20:22.150 Abort Command Limit: 1 00:20:22.150 Async 
Event Request Limit: 4 00:20:22.150 Number of Firmware Slots: N/A 00:20:22.150 Firmware Slot 1 Read-Only: N/A 00:20:22.150 Firmware Activation Without Reset: N/A 00:20:22.150 Multiple Update Detection Support: N/A 00:20:22.150 Firmware Update Granularity: No Information Provided 00:20:22.150 Per-Namespace SMART Log: No 00:20:22.150 Asymmetric Namespace Access Log Page: Not Supported 00:20:22.150 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:22.150 Command Effects Log Page: Not Supported 00:20:22.150 Get Log Page Extended Data: Supported 00:20:22.150 Telemetry Log Pages: Not Supported 00:20:22.150 Persistent Event Log Pages: Not Supported 00:20:22.150 Supported Log Pages Log Page: May Support 00:20:22.150 Commands Supported & Effects Log Page: Not Supported 00:20:22.150 Feature Identifiers & Effects Log Page:May Support 00:20:22.150 NVMe-MI Commands & Effects Log Page: May Support 00:20:22.150 Data Area 4 for Telemetry Log: Not Supported 00:20:22.150 Error Log Page Entries Supported: 128 00:20:22.150 Keep Alive: Not Supported 00:20:22.150 00:20:22.150 NVM Command Set Attributes 00:20:22.151 ========================== 00:20:22.151 Submission Queue Entry Size 00:20:22.151 Max: 1 00:20:22.151 Min: 1 00:20:22.151 Completion Queue Entry Size 00:20:22.151 Max: 1 00:20:22.151 Min: 1 00:20:22.151 Number of Namespaces: 0 00:20:22.151 Compare Command: Not Supported 00:20:22.151 Write Uncorrectable Command: Not Supported 00:20:22.151 Dataset Management Command: Not Supported 00:20:22.151 Write Zeroes Command: Not Supported 00:20:22.151 Set Features Save Field: Not Supported 00:20:22.151 Reservations: Not Supported 00:20:22.151 Timestamp: Not Supported 00:20:22.151 Copy: Not Supported 00:20:22.151 Volatile Write Cache: Not Present 00:20:22.151 Atomic Write Unit (Normal): 1 00:20:22.151 Atomic Write Unit (PFail): 1 00:20:22.151 Atomic Compare & Write Unit: 1 00:20:22.151 Fused Compare & Write: Supported 00:20:22.151 Scatter-Gather List 00:20:22.151 SGL Command Set: Supported 00:20:22.151 SGL Keyed: Supported 00:20:22.151 SGL Bit Bucket Descriptor: Not Supported 00:20:22.151 SGL Metadata Pointer: Not Supported 00:20:22.151 Oversized SGL: Not Supported 00:20:22.151 SGL Metadata Address: Not Supported 00:20:22.151 SGL Offset: Supported 00:20:22.151 Transport SGL Data Block: Not Supported 00:20:22.151 Replay Protected Memory Block: Not Supported 00:20:22.151 00:20:22.151 Firmware Slot Information 00:20:22.151 ========================= 00:20:22.151 Active slot: 0 00:20:22.151 00:20:22.151 00:20:22.151 Error Log 00:20:22.151 ========= 00:20:22.151 00:20:22.151 Active Namespaces 00:20:22.151 ================= 00:20:22.151 Discovery Log Page 00:20:22.151 ================== 00:20:22.151 Generation Counter: 2 00:20:22.151 Number of Records: 2 00:20:22.151 Record Format: 0 00:20:22.151 00:20:22.151 Discovery Log Entry 0 00:20:22.151 ---------------------- 00:20:22.151 Transport Type: 3 (TCP) 00:20:22.151 Address Family: 1 (IPv4) 00:20:22.151 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:22.151 Entry Flags: 00:20:22.151 Duplicate Returned Information: 1 00:20:22.151 Explicit Persistent Connection Support for Discovery: 1 00:20:22.151 Transport Requirements: 00:20:22.151 Secure Channel: Not Required 00:20:22.151 Port ID: 0 (0x0000) 00:20:22.151 Controller ID: 65535 (0xffff) 00:20:22.151 Admin Max SQ Size: 128 00:20:22.151 Transport Service Identifier: 4420 00:20:22.151 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:22.151 Transport Address: 10.0.0.2 00:20:22.151 
Discovery Log Entry 1 00:20:22.151 ---------------------- 00:20:22.151 Transport Type: 3 (TCP) 00:20:22.151 Address Family: 1 (IPv4) 00:20:22.151 Subsystem Type: 2 (NVM Subsystem) 00:20:22.151 Entry Flags: 00:20:22.151 Duplicate Returned Information: 0 00:20:22.151 Explicit Persistent Connection Support for Discovery: 0 00:20:22.151 Transport Requirements: 00:20:22.151 Secure Channel: Not Required 00:20:22.151 Port ID: 0 (0x0000) 00:20:22.151 Controller ID: 65535 (0xffff) 00:20:22.151 Admin Max SQ Size: 128 00:20:22.151 Transport Service Identifier: 4420 00:20:22.151 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:22.151 Transport Address: 10.0.0.2 [2024-07-25 10:27:11.747782] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:22.151 [2024-07-25 10:27:11.747804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8e8400 00:20:22.151 [2024-07-25 10:27:11.747817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.151 [2024-07-25 10:27:11.747827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948540) on tqpair=0x8e8400 00:20:22.151 [2024-07-25 10:27:11.747842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.151 [2024-07-25 10:27:11.747852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9486c0) on tqpair=0x8e8400 00:20:22.151 [2024-07-25 10:27:11.747864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.151 [2024-07-25 10:27:11.747874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948840) on tqpair=0x8e8400 00:20:22.151 [2024-07-25 10:27:11.747883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.151 [2024-07-25 10:27:11.747903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.747913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.747920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e8400) 00:20:22.151 [2024-07-25 10:27:11.747933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.151 [2024-07-25 10:27:11.747960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948840, cid 3, qid 0 00:20:22.151 [2024-07-25 10:27:11.748123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.151 [2024-07-25 10:27:11.748136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.151 [2024-07-25 10:27:11.748143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.748151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948840) on tqpair=0x8e8400 00:20:22.151 [2024-07-25 10:27:11.748164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.748173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.748180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e8400) 00:20:22.151 [2024-07-25 10:27:11.748192] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.151 [2024-07-25 10:27:11.748220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948840, cid 3, qid 0 00:20:22.151 [2024-07-25 10:27:11.748357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.151 [2024-07-25 10:27:11.748373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.151 [2024-07-25 10:27:11.748380] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.748388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948840) on tqpair=0x8e8400 00:20:22.151 [2024-07-25 10:27:11.748398] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:22.151 [2024-07-25 10:27:11.748407] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:22.151 [2024-07-25 10:27:11.748424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.748434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.748441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e8400) 00:20:22.151 [2024-07-25 10:27:11.748453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.151 [2024-07-25 10:27:11.748475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948840, cid 3, qid 0 00:20:22.151 [2024-07-25 10:27:11.752503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.151 [2024-07-25 10:27:11.752516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.151 [2024-07-25 10:27:11.752524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.752531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948840) on tqpair=0x8e8400 00:20:22.151 [2024-07-25 10:27:11.752551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.752562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.752569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e8400) 00:20:22.151 [2024-07-25 10:27:11.752581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.151 [2024-07-25 10:27:11.752609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948840, cid 3, qid 0 00:20:22.151 [2024-07-25 10:27:11.752766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.151 [2024-07-25 10:27:11.752781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.151 [2024-07-25 10:27:11.752789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.151 [2024-07-25 10:27:11.752797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948840) on tqpair=0x8e8400 00:20:22.151 [2024-07-25 10:27:11.752811] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:20:22.151 00:20:22.151 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:22.151 [2024-07-25 10:27:11.789937] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:20:22.151 [2024-07-25 10:27:11.789988] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545092 ] 00:20:22.151 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.151 [2024-07-25 10:27:11.832010] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:22.151 [2024-07-25 10:27:11.832070] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:22.151 [2024-07-25 10:27:11.832082] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:22.151 [2024-07-25 10:27:11.832098] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:22.151 [2024-07-25 10:27:11.832112] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:22.152 [2024-07-25 10:27:11.832313] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:22.152 [2024-07-25 10:27:11.832356] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20b4400 0 00:20:22.152 [2024-07-25 10:27:11.838492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:22.152 [2024-07-25 10:27:11.838516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:22.152 [2024-07-25 10:27:11.838526] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:22.152 [2024-07-25 10:27:11.838533] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:22.152 [2024-07-25 10:27:11.838575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.838588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.838596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.152 [2024-07-25 10:27:11.838613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:22.152 [2024-07-25 10:27:11.838641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.152 [2024-07-25 10:27:11.846493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.152 [2024-07-25 10:27:11.846512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.152 [2024-07-25 10:27:11.846521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.846529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.152 [2024-07-25 10:27:11.846545] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:22.152 [2024-07-25 10:27:11.846563] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:22.152 [2024-07-25 10:27:11.846574] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:22.152 [2024-07-25 10:27:11.846595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.846605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.846612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.152 [2024-07-25 10:27:11.846626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.152 [2024-07-25 10:27:11.846651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.152 [2024-07-25 10:27:11.846769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.152 [2024-07-25 10:27:11.846783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.152 [2024-07-25 10:27:11.846790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.846798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.152 [2024-07-25 10:27:11.846812] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:22.152 [2024-07-25 10:27:11.846827] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:22.152 [2024-07-25 10:27:11.846841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.846849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.846857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.152 [2024-07-25 10:27:11.846869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.152 [2024-07-25 10:27:11.846892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.152 [2024-07-25 10:27:11.847011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.152 [2024-07-25 10:27:11.847027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.152 [2024-07-25 10:27:11.847035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.152 [2024-07-25 10:27:11.847053] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:22.152 [2024-07-25 10:27:11.847068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:22.152 [2024-07-25 10:27:11.847082] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.152 [2024-07-25 10:27:11.847110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.152 [2024-07-25 10:27:11.847133] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.152 [2024-07-25 10:27:11.847249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.152 [2024-07-25 10:27:11.847265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.152 [2024-07-25 10:27:11.847273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.152 [2024-07-25 10:27:11.847291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:22.152 [2024-07-25 10:27:11.847313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.152 [2024-07-25 10:27:11.847344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.152 [2024-07-25 10:27:11.847367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.152 [2024-07-25 10:27:11.847472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.152 [2024-07-25 10:27:11.847495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.152 [2024-07-25 10:27:11.847503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.152 [2024-07-25 10:27:11.847520] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:22.152 [2024-07-25 10:27:11.847530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:22.152 [2024-07-25 10:27:11.847545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:22.152 [2024-07-25 10:27:11.847656] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:22.152 [2024-07-25 10:27:11.847665] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:22.152 [2024-07-25 10:27:11.847680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.152 [2024-07-25 10:27:11.847709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.152 [2024-07-25 10:27:11.847732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.152 [2024-07-25 10:27:11.847848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.152 [2024-07-25 10:27:11.847864] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.152 [2024-07-25 10:27:11.847872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.152 [2024-07-25 10:27:11.847889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:22.152 [2024-07-25 10:27:11.847907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.847924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.152 [2024-07-25 10:27:11.847937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.152 [2024-07-25 10:27:11.847959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.152 [2024-07-25 10:27:11.848058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.152 [2024-07-25 10:27:11.848074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.152 [2024-07-25 10:27:11.848082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.848090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.152 [2024-07-25 10:27:11.848098] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:22.152 [2024-07-25 10:27:11.848112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:22.152 [2024-07-25 10:27:11.848128] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:22.152 [2024-07-25 10:27:11.848146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:22.152 [2024-07-25 10:27:11.848162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.848171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.152 [2024-07-25 10:27:11.848183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.152 [2024-07-25 10:27:11.848206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.152 [2024-07-25 10:27:11.848354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.152 [2024-07-25 10:27:11.848371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.152 [2024-07-25 10:27:11.848379] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.848387] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b4400): datao=0, datal=4096, cccid=0 00:20:22.152 [2024-07-25 10:27:11.848396] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21143c0) on tqpair(0x20b4400): expected_datao=0, 
payload_size=4096 00:20:22.152 [2024-07-25 10:27:11.848404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.848424] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.152 [2024-07-25 10:27:11.848434] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.153 [2024-07-25 10:27:11.888597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.153 [2024-07-25 10:27:11.888606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.153 [2024-07-25 10:27:11.888626] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:22.153 [2024-07-25 10:27:11.888636] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:22.153 [2024-07-25 10:27:11.888645] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:22.153 [2024-07-25 10:27:11.888653] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:22.153 [2024-07-25 10:27:11.888662] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:22.153 [2024-07-25 10:27:11.888671] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.888687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.888705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888715] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.153 [2024-07-25 10:27:11.888735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:22.153 [2024-07-25 10:27:11.888760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.153 [2024-07-25 10:27:11.888870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.153 [2024-07-25 10:27:11.888883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.153 [2024-07-25 10:27:11.888895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.153 [2024-07-25 10:27:11.888916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b4400) 00:20:22.153 [2024-07-25 10:27:11.888943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
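The *DEBUG* records in this stretch walk the admin-queue bring-up state machine in order: connect adminq, read vs, read cap, check en, wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, reset admin queue, identify controller, then configure AER (the remaining ASYNC EVENT REQUEST submissions continue below). These per-module debug lines are normally compiled out of release builds, so reproducing this trace presumably needs a debug build; a sketch, assuming a source checkout of the same SPDK tree:

    ./configure --enable-debug && make -j"$(nproc)"
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all   # same flag as the invocation above; enables every registered debug log flag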
00:20:22.153 [2024-07-25 10:27:11.888954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20b4400) 00:20:22.153 [2024-07-25 10:27:11.888980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.153 [2024-07-25 10:27:11.888991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.888999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.889006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20b4400) 00:20:22.153 [2024-07-25 10:27:11.889017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.153 [2024-07-25 10:27:11.889028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.889035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.889043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b4400) 00:20:22.153 [2024-07-25 10:27:11.889053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.153 [2024-07-25 10:27:11.889063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.889083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.889097] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.889105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b4400) 00:20:22.153 [2024-07-25 10:27:11.889117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.153 [2024-07-25 10:27:11.889141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21143c0, cid 0, qid 0 00:20:22.153 [2024-07-25 10:27:11.889153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114540, cid 1, qid 0 00:20:22.153 [2024-07-25 10:27:11.889162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21146c0, cid 2, qid 0 00:20:22.153 [2024-07-25 10:27:11.889171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114840, cid 3, qid 0 00:20:22.153 [2024-07-25 10:27:11.889180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21149c0, cid 4, qid 0 00:20:22.153 [2024-07-25 10:27:11.889316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.153 [2024-07-25 10:27:11.889329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.153 [2024-07-25 10:27:11.889336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.889344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21149c0) on tqpair=0x20b4400 00:20:22.153 [2024-07-25 10:27:11.889354] 
nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:22.153 [2024-07-25 10:27:11.889368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.889388] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.889401] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.889413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.889421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.889428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b4400) 00:20:22.153 [2024-07-25 10:27:11.889441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:22.153 [2024-07-25 10:27:11.889463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21149c0, cid 4, qid 0 00:20:22.153 [2024-07-25 10:27:11.893502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.153 [2024-07-25 10:27:11.893520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.153 [2024-07-25 10:27:11.893528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.893536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21149c0) on tqpair=0x20b4400 00:20:22.153 [2024-07-25 10:27:11.893613] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.893635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.893651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.893660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b4400) 00:20:22.153 [2024-07-25 10:27:11.893672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.153 [2024-07-25 10:27:11.893696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21149c0, cid 4, qid 0 00:20:22.153 [2024-07-25 10:27:11.893822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.153 [2024-07-25 10:27:11.893838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.153 [2024-07-25 10:27:11.893846] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.893854] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b4400): datao=0, datal=4096, cccid=4 00:20:22.153 [2024-07-25 10:27:11.893863] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21149c0) on tqpair(0x20b4400): expected_datao=0, payload_size=4096 00:20:22.153 [2024-07-25 10:27:11.893872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.153 [2024-07-25 
10:27:11.893884] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.893892] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.893906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.153 [2024-07-25 10:27:11.893917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.153 [2024-07-25 10:27:11.893924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.893932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21149c0) on tqpair=0x20b4400 00:20:22.153 [2024-07-25 10:27:11.893949] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:22.153 [2024-07-25 10:27:11.893967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.893986] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:22.153 [2024-07-25 10:27:11.894005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.894015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b4400) 00:20:22.153 [2024-07-25 10:27:11.894027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.153 [2024-07-25 10:27:11.894050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21149c0, cid 4, qid 0 00:20:22.153 [2024-07-25 10:27:11.894191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.153 [2024-07-25 10:27:11.894207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.153 [2024-07-25 10:27:11.894214] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.153 [2024-07-25 10:27:11.894222] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b4400): datao=0, datal=4096, cccid=4 00:20:22.153 [2024-07-25 10:27:11.894230] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21149c0) on tqpair(0x20b4400): expected_datao=0, payload_size=4096 00:20:22.153 [2024-07-25 10:27:11.894239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894251] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894259] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.154 [2024-07-25 10:27:11.894284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.154 [2024-07-25 10:27:11.894291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21149c0) on tqpair=0x20b4400 00:20:22.154 [2024-07-25 10:27:11.894323] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:22.154 [2024-07-25 10:27:11.894344] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:22.154 
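The IDENTIFY commands in this stretch select their data structure via the CNS value in cdw10, visible in the NOTICE lines: cdw10:00000002 was the active namespace ID list, cdw10:00000000 is Identify Namespace for nsid:1, and cdw10:00000003 (just below) is the namespace identification descriptors. One-off equivalents with nvme-cli once the subsystem is connected, assuming it enumerates as /dev/nvme1 (the device name is a guess; check nvme list first):

    nvme list-ns /dev/nvme1         # CNS 02h: active namespace ID list
    nvme id-ns /dev/nvme1 -n 1      # CNS 00h: identify namespace 1
    nvme ns-descs /dev/nvme1 -n 1   # CNS 03h: namespace ID descriptors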
[2024-07-25 10:27:11.894360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b4400) 00:20:22.154 [2024-07-25 10:27:11.894381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.154 [2024-07-25 10:27:11.894403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21149c0, cid 4, qid 0 00:20:22.154 [2024-07-25 10:27:11.894532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.154 [2024-07-25 10:27:11.894547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.154 [2024-07-25 10:27:11.894555] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894562] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b4400): datao=0, datal=4096, cccid=4 00:20:22.154 [2024-07-25 10:27:11.894571] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21149c0) on tqpair(0x20b4400): expected_datao=0, payload_size=4096 00:20:22.154 [2024-07-25 10:27:11.894580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894591] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894599] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.154 [2024-07-25 10:27:11.894623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.154 [2024-07-25 10:27:11.894631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21149c0) on tqpair=0x20b4400 00:20:22.154 [2024-07-25 10:27:11.894655] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:22.154 [2024-07-25 10:27:11.894675] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:22.154 [2024-07-25 10:27:11.894691] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:22.154 [2024-07-25 10:27:11.894705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:22.154 [2024-07-25 10:27:11.894715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:22.154 [2024-07-25 10:27:11.894725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:22.154 [2024-07-25 10:27:11.894734] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:22.154 [2024-07-25 10:27:11.894743] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:22.154 [2024-07-25 10:27:11.894753] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to ready (no timeout) 00:20:22.154 [2024-07-25 10:27:11.894775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894784] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b4400) 00:20:22.154 [2024-07-25 10:27:11.894797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.154 [2024-07-25 10:27:11.894809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.894825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b4400) 00:20:22.154 [2024-07-25 10:27:11.894836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.154 [2024-07-25 10:27:11.894862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21149c0, cid 4, qid 0 00:20:22.154 [2024-07-25 10:27:11.894875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114b40, cid 5, qid 0 00:20:22.154 [2024-07-25 10:27:11.894989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.154 [2024-07-25 10:27:11.895002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.154 [2024-07-25 10:27:11.895010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.895018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21149c0) on tqpair=0x20b4400 00:20:22.154 [2024-07-25 10:27:11.895030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.154 [2024-07-25 10:27:11.895040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.154 [2024-07-25 10:27:11.895048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.895055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114b40) on tqpair=0x20b4400 00:20:22.154 [2024-07-25 10:27:11.895073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.895082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b4400) 00:20:22.154 [2024-07-25 10:27:11.895094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.154 [2024-07-25 10:27:11.895116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114b40, cid 5, qid 0 00:20:22.154 [2024-07-25 10:27:11.895224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.154 [2024-07-25 10:27:11.895237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.154 [2024-07-25 10:27:11.895245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.895252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114b40) on tqpair=0x20b4400 00:20:22.154 [2024-07-25 10:27:11.895274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.895284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b4400) 00:20:22.154 [2024-07-25 10:27:11.895296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.154 [2024-07-25 10:27:11.895318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114b40, cid 5, qid 0 00:20:22.154 [2024-07-25 10:27:11.895415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.154 [2024-07-25 10:27:11.895428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.154 [2024-07-25 10:27:11.895436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.895444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114b40) on tqpair=0x20b4400 00:20:22.154 [2024-07-25 10:27:11.895461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.154 [2024-07-25 10:27:11.895470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b4400) 00:20:22.154 [2024-07-25 10:27:11.895489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.154 [2024-07-25 10:27:11.895513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114b40, cid 5, qid 0 00:20:22.154 [2024-07-25 10:27:11.895614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.155 [2024-07-25 10:27:11.895627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.155 [2024-07-25 10:27:11.895634] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.895642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114b40) on tqpair=0x20b4400 00:20:22.155 [2024-07-25 10:27:11.895667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.895679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b4400) 00:20:22.155 [2024-07-25 10:27:11.895691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.155 [2024-07-25 10:27:11.895705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.895713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b4400) 00:20:22.155 [2024-07-25 10:27:11.895725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.155 [2024-07-25 10:27:11.895738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.895746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x20b4400) 00:20:22.155 [2024-07-25 10:27:11.895757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.155 [2024-07-25 10:27:11.895771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.895779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20b4400) 00:20:22.155 [2024-07-25 10:27:11.895790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
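Each GET LOG PAGE above packs the log page identifier into cdw10 bits 07:00 and the zero-based dword count into bits 31:16, so the four reads decode to Error Information (01h, 8192 bytes, room for the 128 advertised error log entries), SMART / Health (02h, 512 bytes), Firmware Slot (03h, 512 bytes), and Commands Supported and Effects (05h, 4096 bytes); the payload_size values in the c2h_data records that follow match. A quick decode in bash arithmetic:

    for cdw10 in 0x07ff0001 0x007f0002 0x007f0003 0x03ff0005; do
        # LID = low byte; bytes = (NUMDL + 1) dwords * 4
        printf 'cdw10=%s LID=0x%02x bytes=%u\n' \
            "$cdw10" $(( cdw10 & 0xff )) $(( ((cdw10 >> 16) + 1) * 4 ))
    done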
00:20:22.155 [2024-07-25 10:27:11.895814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114b40, cid 5, qid 0 00:20:22.155 [2024-07-25 10:27:11.895826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21149c0, cid 4, qid 0 00:20:22.155 [2024-07-25 10:27:11.895835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114cc0, cid 6, qid 0 00:20:22.155 [2024-07-25 10:27:11.895844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114e40, cid 7, qid 0 00:20:22.155 [2024-07-25 10:27:11.896038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.155 [2024-07-25 10:27:11.896052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.155 [2024-07-25 10:27:11.896059] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896067] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b4400): datao=0, datal=8192, cccid=5 00:20:22.155 [2024-07-25 10:27:11.896076] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2114b40) on tqpair(0x20b4400): expected_datao=0, payload_size=8192 00:20:22.155 [2024-07-25 10:27:11.896084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896104] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896114] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.155 [2024-07-25 10:27:11.896139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.155 [2024-07-25 10:27:11.896146] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896154] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b4400): datao=0, datal=512, cccid=4 00:20:22.155 [2024-07-25 10:27:11.896163] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21149c0) on tqpair(0x20b4400): expected_datao=0, payload_size=512 00:20:22.155 [2024-07-25 10:27:11.896171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896182] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.155 [2024-07-25 10:27:11.896210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.155 [2024-07-25 10:27:11.896218] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896225] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b4400): datao=0, datal=512, cccid=6 00:20:22.155 [2024-07-25 10:27:11.896234] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2114cc0) on tqpair(0x20b4400): expected_datao=0, payload_size=512 00:20:22.155 [2024-07-25 10:27:11.896242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896253] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896261] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.155 
[2024-07-25 10:27:11.896281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.155 [2024-07-25 10:27:11.896288] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896295] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b4400): datao=0, datal=4096, cccid=7 00:20:22.155 [2024-07-25 10:27:11.896304] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2114e40) on tqpair(0x20b4400): expected_datao=0, payload_size=4096 00:20:22.155 [2024-07-25 10:27:11.896313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896324] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896332] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.155 [2024-07-25 10:27:11.896356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.155 [2024-07-25 10:27:11.896363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114b40) on tqpair=0x20b4400 00:20:22.155 [2024-07-25 10:27:11.896391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.155 [2024-07-25 10:27:11.896404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.155 [2024-07-25 10:27:11.896411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21149c0) on tqpair=0x20b4400 00:20:22.155 [2024-07-25 10:27:11.896439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.155 [2024-07-25 10:27:11.896451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.155 [2024-07-25 10:27:11.896458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114cc0) on tqpair=0x20b4400 00:20:22.155 [2024-07-25 10:27:11.896485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.155 [2024-07-25 10:27:11.896498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.155 [2024-07-25 10:27:11.896505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.155 [2024-07-25 10:27:11.896513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114e40) on tqpair=0x20b4400 00:20:22.155 ===================================================== 00:20:22.155 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.155 ===================================================== 00:20:22.155 Controller Capabilities/Features 00:20:22.155 ================================ 00:20:22.155 Vendor ID: 8086 00:20:22.155 Subsystem Vendor ID: 8086 00:20:22.155 Serial Number: SPDK00000000000001 00:20:22.155 Model Number: SPDK bdev Controller 00:20:22.155 Firmware Version: 24.09 00:20:22.155 Recommended Arb Burst: 6 00:20:22.155 IEEE OUI Identifier: e4 d2 5c 00:20:22.155 Multi-path I/O 00:20:22.155 May have multiple subsystem ports: Yes 00:20:22.155 May have multiple controllers: Yes 00:20:22.155 Associated with SR-IOV VF: No 00:20:22.155 Max Data Transfer Size: 131072 00:20:22.155 Max Number of Namespaces: 32 00:20:22.155 
Max Number of I/O Queues: 127 00:20:22.155 NVMe Specification Version (VS): 1.3 00:20:22.155 NVMe Specification Version (Identify): 1.3 00:20:22.155 Maximum Queue Entries: 128 00:20:22.155 Contiguous Queues Required: Yes 00:20:22.155 Arbitration Mechanisms Supported 00:20:22.155 Weighted Round Robin: Not Supported 00:20:22.155 Vendor Specific: Not Supported 00:20:22.155 Reset Timeout: 15000 ms 00:20:22.155 Doorbell Stride: 4 bytes 00:20:22.155 NVM Subsystem Reset: Not Supported 00:20:22.155 Command Sets Supported 00:20:22.155 NVM Command Set: Supported 00:20:22.155 Boot Partition: Not Supported 00:20:22.155 Memory Page Size Minimum: 4096 bytes 00:20:22.155 Memory Page Size Maximum: 4096 bytes 00:20:22.155 Persistent Memory Region: Not Supported 00:20:22.155 Optional Asynchronous Events Supported 00:20:22.155 Namespace Attribute Notices: Supported 00:20:22.155 Firmware Activation Notices: Not Supported 00:20:22.155 ANA Change Notices: Not Supported 00:20:22.155 PLE Aggregate Log Change Notices: Not Supported 00:20:22.155 LBA Status Info Alert Notices: Not Supported 00:20:22.155 EGE Aggregate Log Change Notices: Not Supported 00:20:22.155 Normal NVM Subsystem Shutdown event: Not Supported 00:20:22.155 Zone Descriptor Change Notices: Not Supported 00:20:22.155 Discovery Log Change Notices: Not Supported 00:20:22.155 Controller Attributes 00:20:22.155 128-bit Host Identifier: Supported 00:20:22.155 Non-Operational Permissive Mode: Not Supported 00:20:22.155 NVM Sets: Not Supported 00:20:22.155 Read Recovery Levels: Not Supported 00:20:22.155 Endurance Groups: Not Supported 00:20:22.155 Predictable Latency Mode: Not Supported 00:20:22.155 Traffic Based Keep ALive: Not Supported 00:20:22.155 Namespace Granularity: Not Supported 00:20:22.155 SQ Associations: Not Supported 00:20:22.155 UUID List: Not Supported 00:20:22.155 Multi-Domain Subsystem: Not Supported 00:20:22.155 Fixed Capacity Management: Not Supported 00:20:22.155 Variable Capacity Management: Not Supported 00:20:22.155 Delete Endurance Group: Not Supported 00:20:22.155 Delete NVM Set: Not Supported 00:20:22.155 Extended LBA Formats Supported: Not Supported 00:20:22.155 Flexible Data Placement Supported: Not Supported 00:20:22.155 00:20:22.155 Controller Memory Buffer Support 00:20:22.155 ================================ 00:20:22.155 Supported: No 00:20:22.156 00:20:22.156 Persistent Memory Region Support 00:20:22.156 ================================ 00:20:22.156 Supported: No 00:20:22.156 00:20:22.156 Admin Command Set Attributes 00:20:22.156 ============================ 00:20:22.156 Security Send/Receive: Not Supported 00:20:22.156 Format NVM: Not Supported 00:20:22.156 Firmware Activate/Download: Not Supported 00:20:22.156 Namespace Management: Not Supported 00:20:22.156 Device Self-Test: Not Supported 00:20:22.156 Directives: Not Supported 00:20:22.156 NVMe-MI: Not Supported 00:20:22.156 Virtualization Management: Not Supported 00:20:22.156 Doorbell Buffer Config: Not Supported 00:20:22.156 Get LBA Status Capability: Not Supported 00:20:22.156 Command & Feature Lockdown Capability: Not Supported 00:20:22.156 Abort Command Limit: 4 00:20:22.156 Async Event Request Limit: 4 00:20:22.156 Number of Firmware Slots: N/A 00:20:22.156 Firmware Slot 1 Read-Only: N/A 00:20:22.156 Firmware Activation Without Reset: N/A 00:20:22.156 Multiple Update Detection Support: N/A 00:20:22.156 Firmware Update Granularity: No Information Provided 00:20:22.156 Per-Namespace SMART Log: No 00:20:22.156 Asymmetric Namespace Access Log Page: Not Supported 
00:20:22.156 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:22.156 Command Effects Log Page: Supported 00:20:22.156 Get Log Page Extended Data: Supported 00:20:22.156 Telemetry Log Pages: Not Supported 00:20:22.156 Persistent Event Log Pages: Not Supported 00:20:22.156 Supported Log Pages Log Page: May Support 00:20:22.156 Commands Supported & Effects Log Page: Not Supported 00:20:22.156 Feature Identifiers & Effects Log Page:May Support 00:20:22.156 NVMe-MI Commands & Effects Log Page: May Support 00:20:22.156 Data Area 4 for Telemetry Log: Not Supported 00:20:22.156 Error Log Page Entries Supported: 128 00:20:22.156 Keep Alive: Supported 00:20:22.156 Keep Alive Granularity: 10000 ms 00:20:22.156 00:20:22.156 NVM Command Set Attributes 00:20:22.156 ========================== 00:20:22.156 Submission Queue Entry Size 00:20:22.156 Max: 64 00:20:22.156 Min: 64 00:20:22.156 Completion Queue Entry Size 00:20:22.156 Max: 16 00:20:22.156 Min: 16 00:20:22.156 Number of Namespaces: 32 00:20:22.156 Compare Command: Supported 00:20:22.156 Write Uncorrectable Command: Not Supported 00:20:22.156 Dataset Management Command: Supported 00:20:22.156 Write Zeroes Command: Supported 00:20:22.156 Set Features Save Field: Not Supported 00:20:22.156 Reservations: Supported 00:20:22.156 Timestamp: Not Supported 00:20:22.156 Copy: Supported 00:20:22.156 Volatile Write Cache: Present 00:20:22.156 Atomic Write Unit (Normal): 1 00:20:22.156 Atomic Write Unit (PFail): 1 00:20:22.156 Atomic Compare & Write Unit: 1 00:20:22.156 Fused Compare & Write: Supported 00:20:22.156 Scatter-Gather List 00:20:22.156 SGL Command Set: Supported 00:20:22.156 SGL Keyed: Supported 00:20:22.156 SGL Bit Bucket Descriptor: Not Supported 00:20:22.156 SGL Metadata Pointer: Not Supported 00:20:22.156 Oversized SGL: Not Supported 00:20:22.156 SGL Metadata Address: Not Supported 00:20:22.156 SGL Offset: Supported 00:20:22.156 Transport SGL Data Block: Not Supported 00:20:22.156 Replay Protected Memory Block: Not Supported 00:20:22.156 00:20:22.156 Firmware Slot Information 00:20:22.156 ========================= 00:20:22.156 Active slot: 1 00:20:22.156 Slot 1 Firmware Revision: 24.09 00:20:22.156 00:20:22.156 00:20:22.156 Commands Supported and Effects 00:20:22.156 ============================== 00:20:22.156 Admin Commands 00:20:22.156 -------------- 00:20:22.156 Get Log Page (02h): Supported 00:20:22.156 Identify (06h): Supported 00:20:22.156 Abort (08h): Supported 00:20:22.156 Set Features (09h): Supported 00:20:22.156 Get Features (0Ah): Supported 00:20:22.156 Asynchronous Event Request (0Ch): Supported 00:20:22.156 Keep Alive (18h): Supported 00:20:22.156 I/O Commands 00:20:22.156 ------------ 00:20:22.156 Flush (00h): Supported LBA-Change 00:20:22.156 Write (01h): Supported LBA-Change 00:20:22.156 Read (02h): Supported 00:20:22.156 Compare (05h): Supported 00:20:22.156 Write Zeroes (08h): Supported LBA-Change 00:20:22.156 Dataset Management (09h): Supported LBA-Change 00:20:22.156 Copy (19h): Supported LBA-Change 00:20:22.156 00:20:22.156 Error Log 00:20:22.156 ========= 00:20:22.156 00:20:22.156 Arbitration 00:20:22.156 =========== 00:20:22.156 Arbitration Burst: 1 00:20:22.156 00:20:22.156 Power Management 00:20:22.156 ================ 00:20:22.156 Number of Power States: 1 00:20:22.156 Current Power State: Power State #0 00:20:22.156 Power State #0: 00:20:22.156 Max Power: 0.00 W 00:20:22.156 Non-Operational State: Operational 00:20:22.156 Entry Latency: Not Reported 00:20:22.156 Exit Latency: Not Reported 00:20:22.156 Relative Read 
Throughput: 0 00:20:22.156 Relative Read Latency: 0 00:20:22.156 Relative Write Throughput: 0 00:20:22.156 Relative Write Latency: 0 00:20:22.156 Idle Power: Not Reported 00:20:22.156 Active Power: Not Reported 00:20:22.156 Non-Operational Permissive Mode: Not Supported 00:20:22.156 00:20:22.156 Health Information 00:20:22.156 ================== 00:20:22.156 Critical Warnings: 00:20:22.156 Available Spare Space: OK 00:20:22.156 Temperature: OK 00:20:22.156 Device Reliability: OK 00:20:22.156 Read Only: No 00:20:22.156 Volatile Memory Backup: OK 00:20:22.156 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:22.156 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:22.156 Available Spare: 0% 00:20:22.156 Available Spare Threshold: 0% 00:20:22.156 Life Percentage Used:[2024-07-25 10:27:11.896649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.156 [2024-07-25 10:27:11.896662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20b4400) 00:20:22.156 [2024-07-25 10:27:11.896675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.156 [2024-07-25 10:27:11.896699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114e40, cid 7, qid 0 00:20:22.156 [2024-07-25 10:27:11.896821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.156 [2024-07-25 10:27:11.896837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.156 [2024-07-25 10:27:11.896845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.156 [2024-07-25 10:27:11.896853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114e40) on tqpair=0x20b4400 00:20:22.156 [2024-07-25 10:27:11.896903] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:22.156 [2024-07-25 10:27:11.896924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21143c0) on tqpair=0x20b4400 00:20:22.156 [2024-07-25 10:27:11.896935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.156 [2024-07-25 10:27:11.896945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114540) on tqpair=0x20b4400 00:20:22.156 [2024-07-25 10:27:11.896954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.156 [2024-07-25 10:27:11.896964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21146c0) on tqpair=0x20b4400 00:20:22.156 [2024-07-25 10:27:11.896973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.156 [2024-07-25 10:27:11.896983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114840) on tqpair=0x20b4400 00:20:22.156 [2024-07-25 10:27:11.896991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.156 [2024-07-25 10:27:11.897006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.156 [2024-07-25 10:27:11.897015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.156 [2024-07-25 10:27:11.897022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b4400) 00:20:22.156 [2024-07-25 
10:27:11.897034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.156 [2024-07-25 10:27:11.897058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114840, cid 3, qid 0 00:20:22.156 [2024-07-25 10:27:11.897161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.156 [2024-07-25 10:27:11.897177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.156 [2024-07-25 10:27:11.897185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.156 [2024-07-25 10:27:11.897193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114840) on tqpair=0x20b4400 00:20:22.157 [2024-07-25 10:27:11.897211] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.157 [2024-07-25 10:27:11.897220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.157 [2024-07-25 10:27:11.897228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b4400) 00:20:22.157 [2024-07-25 10:27:11.897240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.157 [2024-07-25 10:27:11.897267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114840, cid 3, qid 0 00:20:22.157 [2024-07-25 10:27:11.897383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.157 [2024-07-25 10:27:11.897395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.157 [2024-07-25 10:27:11.897403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.157 [2024-07-25 10:27:11.897411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114840) on tqpair=0x20b4400 00:20:22.157 [2024-07-25 10:27:11.897420] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:22.157 [2024-07-25 10:27:11.897429] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:22.157 [2024-07-25 10:27:11.897446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.157 [2024-07-25 10:27:11.897455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.157 [2024-07-25 10:27:11.897463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b4400) 00:20:22.157 [2024-07-25 10:27:11.897475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.157 [2024-07-25 10:27:11.901514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2114840, cid 3, qid 0 00:20:22.157 [2024-07-25 10:27:11.901631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.157 [2024-07-25 10:27:11.901645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.157 [2024-07-25 10:27:11.901652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.157 [2024-07-25 10:27:11.901660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2114840) on tqpair=0x20b4400 00:20:22.157 [2024-07-25 10:27:11.901675] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:22.157 0% 00:20:22.157 Data Units Read: 0 00:20:22.157 Data Units Written: 0 00:20:22.157 Host Read 
Commands: 0 00:20:22.157 Host Write Commands: 0 00:20:22.157 Controller Busy Time: 0 minutes 00:20:22.157 Power Cycles: 0 00:20:22.157 Power On Hours: 0 hours 00:20:22.157 Unsafe Shutdowns: 0 00:20:22.157 Unrecoverable Media Errors: 0 00:20:22.157 Lifetime Error Log Entries: 0 00:20:22.157 Warning Temperature Time: 0 minutes 00:20:22.157 Critical Temperature Time: 0 minutes 00:20:22.157 00:20:22.157 Number of Queues 00:20:22.157 ================ 00:20:22.157 Number of I/O Submission Queues: 127 00:20:22.157 Number of I/O Completion Queues: 127 00:20:22.157 00:20:22.157 Active Namespaces 00:20:22.157 ================= 00:20:22.157 Namespace ID:1 00:20:22.157 Error Recovery Timeout: Unlimited 00:20:22.157 Command Set Identifier: NVM (00h) 00:20:22.157 Deallocate: Supported 00:20:22.157 Deallocated/Unwritten Error: Not Supported 00:20:22.157 Deallocated Read Value: Unknown 00:20:22.157 Deallocate in Write Zeroes: Not Supported 00:20:22.157 Deallocated Guard Field: 0xFFFF 00:20:22.157 Flush: Supported 00:20:22.157 Reservation: Supported 00:20:22.157 Namespace Sharing Capabilities: Multiple Controllers 00:20:22.157 Size (in LBAs): 131072 (0GiB) 00:20:22.157 Capacity (in LBAs): 131072 (0GiB) 00:20:22.157 Utilization (in LBAs): 131072 (0GiB) 00:20:22.157 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:22.157 EUI64: ABCDEF0123456789 00:20:22.157 UUID: 66fedb02-6698-41ab-919b-fa8e74c1f511 00:20:22.157 Thin Provisioning: Not Supported 00:20:22.157 Per-NS Atomic Units: Yes 00:20:22.157 Atomic Boundary Size (Normal): 0 00:20:22.157 Atomic Boundary Size (PFail): 0 00:20:22.157 Atomic Boundary Offset: 0 00:20:22.157 Maximum Single Source Range Length: 65535 00:20:22.157 Maximum Copy Length: 65535 00:20:22.157 Maximum Source Range Count: 1 00:20:22.157 NGUID/EUI64 Never Reused: No 00:20:22.157 Namespace Write Protected: No 00:20:22.157 Number of LBA Formats: 1 00:20:22.157 Current LBA Format: LBA Format #00 00:20:22.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:22.157 00:20:22.157 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.416 rmmod nvme_tcp 00:20:22.416 rmmod nvme_fabrics 00:20:22.416 rmmod nvme_keyring 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.416 10:27:11 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1544980 ']' 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1544980 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1544980 ']' 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1544980 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:22.416 10:27:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1544980 00:20:22.416 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:22.416 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:22.416 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1544980' 00:20:22.416 killing process with pid 1544980 00:20:22.416 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1544980 00:20:22.416 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1544980 00:20:22.673 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.673 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.673 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.673 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.673 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.673 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.673 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.673 10:27:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.606 10:27:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.606 00:20:24.606 real 0m4.968s 00:20:24.606 user 0m4.229s 00:20:24.606 sys 0m1.557s 00:20:24.606 10:27:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:24.606 10:27:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:24.606 ************************************ 00:20:24.606 END TEST nvmf_identify 00:20:24.606 ************************************ 00:20:24.606 10:27:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:24.606 10:27:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:24.606 10:27:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:24.606 10:27:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.606 ************************************ 00:20:24.606 START TEST nvmf_perf 00:20:24.606 
************************************ 00:20:24.606 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:24.866 * Looking for test storage... 00:20:24.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.866 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:24.867 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:24.867 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.867 10:27:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:26.779 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:26.779 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:26.779 Found net devices under 0000:08:00.0: cvl_0_0 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:26.779 Found net devices under 0000:08:00.1: cvl_0_1 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.779 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.780 10:27:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:20:26.780 00:20:26.780 --- 10.0.0.2 ping statistics --- 00:20:26.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.780 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:20:26.780 00:20:26.780 --- 10.0.0.1 ping statistics --- 00:20:26.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.780 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1546592 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1546592 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1546592 ']' 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
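The sequence above is nvmftestinit's standard phy-mode topology: the NIC's two ports are split so that the target port (cvl_0_0 in this run) lives in a private network namespace while the initiator port (cvl_0_1) stays in the root namespace, forcing NVMe/TCP traffic across the physical link even though host and target share one machine. A condensed sketch of the commands replayed above (interface and namespace names are the ones from this run and differ per test node):

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side of the link
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                            # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back

Both pings answer in well under a millisecond, so the suite starts nvmf_tgt inside the namespace (core mask 0xF, four reactors) and waits for its RPC socket: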
00:20:26.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.780 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:26.780 [2024-07-25 10:27:16.277003] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:20:26.780 [2024-07-25 10:27:16.277100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.780 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.780 [2024-07-25 10:27:16.344452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.780 [2024-07-25 10:27:16.464744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.780 [2024-07-25 10:27:16.464811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.780 [2024-07-25 10:27:16.464827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.780 [2024-07-25 10:27:16.464840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.780 [2024-07-25 10:27:16.464852] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.780 [2024-07-25 10:27:16.464942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.780 [2024-07-25 10:27:16.465024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.780 [2024-07-25 10:27:16.465080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.780 [2024-07-25 10:27:16.465076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.038 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.038 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:20:27.038 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.038 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:27.038 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:27.038 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.038 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:27.038 10:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:30.325 10:27:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:30.325 10:27:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:30.325 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:20:30.325 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:30.893 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:30.893 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:84:00.0 ']' 00:20:30.893 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:30.893 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:30.893 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:30.893 [2024-07-25 10:27:20.662051] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.152 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:31.410 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:31.410 10:27:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:31.668 10:27:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:31.668 10:27:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:31.927 10:27:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.185 [2024-07-25 10:27:21.890409] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.185 10:27:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:32.444 10:27:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:20:32.444 10:27:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:20:32.444 10:27:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:32.444 10:27:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:20:33.823 Initializing NVMe Controllers 00:20:33.823 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:20:33.823 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:20:33.823 Initialization complete. Launching workers. 
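Two things happen in the trace above: the freshly started target is provisioned entirely over JSON-RPC, and a local PCIe baseline run of spdk_nvme_perf is launched against the backing drive (its latency table follows just below). The provisioning is the canonical NVMe-oF bring-up sequence; condensed, with $rpc standing for the full .../spdk/scripts/rpc.py path used in the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                 # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_transport -t tcp -o           # TCP transport; -o comes from the suite's NVMF_TRANSPORT_OPTS
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
                                                   # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # exported as NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # the local drive, NSID 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The commands are verbatim from the trace; the NSID annotations are an inference from add order, consistent with the NSID 1/NSID 2 associations reported in the fabrics runs below.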
00:20:33.823 ========================================================
00:20:33.823 Latency(us)
00:20:33.823 Device Information : IOPS MiB/s Average min max
00:20:33.823 PCIE (0000:84:00.0) NSID 1 from core 0: 65375.76 255.37 488.75 53.31 8299.71
00:20:33.823 ========================================================
00:20:33.823 Total : 65375.76 255.37 488.75 53.31 8299.71
00:20:33.823
00:20:33.823 10:27:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:33.823 EAL: No free 2048 kB hugepages reported on node 1
00:20:35.202 Initializing NVMe Controllers
00:20:35.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:35.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:35.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:35.202 Initialization complete. Launching workers.
00:20:35.202 ========================================================
00:20:35.202 Latency(us)
00:20:35.202 Device Information : IOPS MiB/s Average min max
00:20:35.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 103.73 0.41 9792.71 232.09 45770.81
00:20:35.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.82 0.28 14034.03 4996.97 47898.79
00:20:35.202 ========================================================
00:20:35.202 Total : 175.55 0.69 11527.80 232.09 47898.79
00:20:35.202
00:20:35.202 10:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:35.202 EAL: No free 2048 kB hugepages reported on node 1
00:20:36.583 Initializing NVMe Controllers
00:20:36.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:36.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:36.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:36.583 Initialization complete. Launching workers.
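All the fabrics runs in this test share one spdk_nvme_perf shape and vary only the knobs: -q is the queue depth (outstanding I/Os), -o the I/O size in bytes, -w the access pattern, -M the read percentage of the mix (50 = half reads, half writes), -t the duration in seconds, and -r the transport ID of the target; the extra -H and -I on the run above appear to enable TCP header and data digests (a reading of the flag letters, not something the log itself states), so that run also measures per-PDU checksum overhead. A hypothetical sweep in the same shape:

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    for qd in 1 32 128; do    # queue-depth sweep at 4 KiB, 50/50 random read/write
        "$PERF" -q "$qd" -o 4096 -w randrw -M 50 -t 1 -r "$TRID"
    done

Each run ends with the per-namespace IOPS/throughput/latency table seen throughout this section, which is how the suite exposes the depth-versus-latency trade-off.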
00:20:36.583 ========================================================
00:20:36.583 Latency(us)
00:20:36.583 Device Information : IOPS MiB/s Average min max
00:20:36.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7646.70 29.87 4185.60 663.21 9153.73
00:20:36.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3828.33 14.95 8441.74 5651.67 46808.07
00:20:36.583 ========================================================
00:20:36.583 Total : 11475.03 44.82 5605.54 663.21 46808.07
00:20:36.583
00:20:36.583 10:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:20:36.583 10:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:20:36.583 10:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:36.583 EAL: No free 2048 kB hugepages reported on node 1
00:20:39.122 Initializing NVMe Controllers
00:20:39.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:39.122 Controller IO queue size 128, less than required.
00:20:39.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.122 Controller IO queue size 128, less than required.
00:20:39.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:39.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:39.122 Initialization complete. Launching workers.
00:20:39.122 ========================================================
00:20:39.122 Latency(us)
00:20:39.122 Device Information : IOPS MiB/s Average min max
00:20:39.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1271.31 317.83 102735.67 63696.60 143781.17
00:20:39.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.91 144.98 224553.11 128605.78 332267.47
00:20:39.122 ========================================================
00:20:39.122 Total : 1851.22 462.80 140896.14 63696.60 332267.47
00:20:39.122
00:20:39.122 10:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:20:39.122 EAL: No free 2048 kB hugepages reported on node 1
00:20:39.122 No valid NVMe controllers or AIO or URING devices found
00:20:39.122 Initializing NVMe Controllers
00:20:39.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:39.122 Controller IO queue size 128, less than required.
00:20:39.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.122 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:20:39.122 Controller IO queue size 128, less than required.
00:20:39.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.122 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512.
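Both WARNING lines (each one finishing with the "Removing this ns from test" notice, the second of which continues just below) are the expected outcome of the odd-sized -o 36964 run: spdk_nvme_perf only issues I/Os that are whole multiples of a namespace's sector size, so with 512-byte sectors it drops both namespaces and is left with nothing to measure, hence the "No valid NVMe controllers or AIO or URING devices found" line above. The arithmetic, plus the nearest sizes that would have passed (illustrative, not part of the test):

    echo $(( 36964 % 512 ))                # 100 -> not sector-aligned
    echo $(( 72 * 512 )) $(( 73 * 512 ))   # 36864 37376 -> aligned neighbors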
Removing this ns from test
00:20:39.122 WARNING: Some requested NVMe devices were skipped
00:20:39.122 10:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:20:39.122 EAL: No free 2048 kB hugepages reported on node 1
00:20:41.655 Initializing NVMe Controllers
00:20:41.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:41.655 Controller IO queue size 128, less than required.
00:20:41.655 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:41.655 Controller IO queue size 128, less than required.
00:20:41.655 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:41.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:41.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:41.655 Initialization complete. Launching workers.
00:20:41.655
00:20:41.655 ====================
00:20:41.655 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:20:41.655 TCP transport:
00:20:41.655 polls: 21360
00:20:41.655 idle_polls: 7471
00:20:41.655 sock_completions: 13889
00:20:41.655 nvme_completions: 4031
00:20:41.655 submitted_requests: 6056
00:20:41.655 queued_requests: 1
00:20:41.655
00:20:41.655 ====================
00:20:41.655 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:20:41.655 TCP transport:
00:20:41.655 polls: 21571
00:20:41.655 idle_polls: 6621
00:20:41.655 sock_completions: 14950
00:20:41.655 nvme_completions: 4469
00:20:41.655 submitted_requests: 6654
00:20:41.655 queued_requests: 1
00:20:41.655 ========================================================
00:20:41.655 Latency(us)
00:20:41.655 Device Information : IOPS MiB/s Average min max
00:20:41.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1007.49 251.87 130745.54 79360.90 195014.52
00:20:41.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1116.99 279.25 116759.74 49858.87 175244.10
00:20:41.655 ========================================================
00:20:41.655 Total : 2124.48 531.12 123392.21 49858.87 195014.52
00:20:41.655
00:20:41.655 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf --
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:41.915 rmmod nvme_tcp 00:20:41.915 rmmod nvme_fabrics 00:20:41.915 rmmod nvme_keyring 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1546592 ']' 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1546592 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1546592 ']' 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1546592 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1546592 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1546592' 00:20:41.915 killing process with pid 1546592 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1546592 00:20:41.915 10:27:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1546592 00:20:43.821 10:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:43.821 10:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:43.821 10:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:43.821 10:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.821 10:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:43.821 10:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.821 10:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.821 10:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:45.728 00:20:45.728 real 0m20.874s 00:20:45.728 user 1m6.255s 00:20:45.728 sys 0m4.624s 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:45.728 ************************************ 00:20:45.728 END TEST nvmf_perf 00:20:45.728 ************************************ 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.728 ************************************ 00:20:45.728 START TEST nvmf_fio_host 00:20:45.728 ************************************ 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:45.728 * Looking for test storage... 00:20:45.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.728 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.729 10:27:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:47.634 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:47.634 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.634 10:27:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.634 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.634 10:27:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:47.635 Found net devices under 0000:08:00.0: cvl_0_0 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:47.635 Found net devices under 0000:08:00.1: cvl_0_1 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:47.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:20:47.635 00:20:47.635 --- 10.0.0.2 ping statistics --- 00:20:47.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.635 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:20:47.635 00:20:47.635 --- 10.0.0.1 ping statistics --- 00:20:47.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.635 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1549635 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1549635 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1549635 ']' 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.635 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.635 [2024-07-25 10:27:37.206518] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:20:47.635 [2024-07-25 10:27:37.206616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.635 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.635 [2024-07-25 10:27:37.275005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.635 [2024-07-25 10:27:37.391838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.635 [2024-07-25 10:27:37.391899] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.635 [2024-07-25 10:27:37.391915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.635 [2024-07-25 10:27:37.391928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.635 [2024-07-25 10:27:37.391939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.635 [2024-07-25 10:27:37.392016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.635 [2024-07-25 10:27:37.392089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.635 [2024-07-25 10:27:37.392093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.635 [2024-07-25 10:27:37.392042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.895 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.895 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:20:47.895 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:48.153 [2024-07-25 10:27:37.774417] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.153 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:48.153 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:48.153 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.153 10:27:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:48.411 Malloc1 00:20:48.411 10:27:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:48.671 10:27:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:49.238 10:27:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.496 [2024-07-25 10:27:39.020665] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.496 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:49.755 
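The target configuration traced above is driven entirely over JSON-RPC. Condensed into one place, and using only names and values that appear in the trace, the sequence host/fio.sh just ran is:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc1           # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

fio then reaches that listener through the SPDK fio plugin rather than the kernel initiator: the run below LD_PRELOADs build/fio/spdk_nvme and encodes the connection in --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'.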
10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:49.755 10:27:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:50.013 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:50.013 fio-3.35 00:20:50.013 Starting 
1 thread 00:20:50.013 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.543 00:20:52.543 test: (groupid=0, jobs=1): err= 0: pid=1549960: Thu Jul 25 10:27:41 2024 00:20:52.543 read: IOPS=7785, BW=30.4MiB/s (31.9MB/s)(61.1MiB/2008msec) 00:20:52.543 slat (usec): min=2, max=213, avg= 2.83, stdev= 2.48 00:20:52.543 clat (usec): min=2907, max=14942, avg=9044.92, stdev=725.66 00:20:52.543 lat (usec): min=2946, max=14945, avg=9047.76, stdev=725.47 00:20:52.543 clat percentiles (usec): 00:20:52.543 | 1.00th=[ 7373], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8455], 00:20:52.543 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:20:52.543 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:20:52.543 | 99.00th=[10683], 99.50th=[10683], 99.90th=[14091], 99.95th=[14484], 00:20:52.543 | 99.99th=[14877] 00:20:52.544 bw ( KiB/s): min=30224, max=31544, per=99.94%, avg=31124.00, stdev=609.14, samples=4 00:20:52.544 iops : min= 7556, max= 7886, avg=7781.00, stdev=152.28, samples=4 00:20:52.544 write: IOPS=7769, BW=30.3MiB/s (31.8MB/s)(60.9MiB/2008msec); 0 zone resets 00:20:52.544 slat (usec): min=2, max=194, avg= 2.94, stdev= 1.83 00:20:52.544 clat (usec): min=2155, max=14474, avg=7351.47, stdev=603.89 00:20:52.544 lat (usec): min=2168, max=14477, avg=7354.41, stdev=603.80 00:20:52.544 clat percentiles (usec): 00:20:52.544 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915], 00:20:52.544 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7504], 00:20:52.544 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:20:52.544 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[12387], 99.95th=[12780], 00:20:52.544 | 99.99th=[14222] 00:20:52.544 bw ( KiB/s): min=30784, max=31336, per=100.00%, avg=31090.00, stdev=273.05, samples=4 00:20:52.544 iops : min= 7696, max= 7834, avg=7772.50, stdev=68.26, samples=4 00:20:52.544 lat (msec) : 4=0.08%, 10=96.18%, 20=3.74% 00:20:52.544 cpu : usr=68.71%, sys=28.45%, ctx=67, majf=0, minf=40 00:20:52.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:52.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.544 issued rwts: total=15634,15601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.544 00:20:52.544 Run status group 0 (all jobs): 00:20:52.544 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=61.1MiB (64.0MB), run=2008-2008msec 00:20:52.544 WRITE: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=60.9MiB (63.9MB), run=2008-2008msec 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:52.544 10:27:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:52.544 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:52.544 fio-3.35 00:20:52.544 Starting 1 thread 00:20:52.544 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.072 00:20:55.072 test: (groupid=0, jobs=1): err= 0: pid=1550253: Thu Jul 25 10:27:44 2024 00:20:55.072 read: IOPS=7644, BW=119MiB/s (125MB/s)(240MiB/2006msec) 00:20:55.072 slat (usec): min=3, max=119, avg= 3.91, stdev= 1.73 00:20:55.072 clat (usec): min=3054, max=18275, avg=9644.69, stdev=2117.98 00:20:55.072 lat (usec): min=3058, max=18279, avg=9648.61, stdev=2117.98 00:20:55.072 clat percentiles (usec): 00:20:55.072 | 1.00th=[ 5211], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 7832], 00:20:55.072 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10159], 00:20:55.072 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12387], 95.00th=[13173], 00:20:55.072 | 99.00th=[15008], 99.50th=[15795], 99.90th=[17957], 99.95th=[17957], 00:20:55.072 | 99.99th=[18220] 00:20:55.072 bw ( KiB/s): min=56864, max=67904, per=51.02%, avg=62400.00, stdev=6337.13, samples=4 
00:20:55.072 iops : min= 3554, max= 4244, avg=3900.00, stdev=396.07, samples=4 00:20:55.072 write: IOPS=4510, BW=70.5MiB/s (73.9MB/s)(127MiB/1807msec); 0 zone resets 00:20:55.072 slat (usec): min=32, max=132, avg=34.77, stdev= 4.17 00:20:55.072 clat (usec): min=3883, max=21646, avg=12518.31, stdev=2175.00 00:20:55.072 lat (usec): min=3922, max=21679, avg=12553.07, stdev=2175.03 00:20:55.072 clat percentiles (usec): 00:20:55.072 | 1.00th=[ 8291], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10683], 00:20:55.072 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12387], 60.00th=[12911], 00:20:55.072 | 70.00th=[13435], 80.00th=[14353], 90.00th=[15533], 95.00th=[16319], 00:20:55.072 | 99.00th=[17695], 99.50th=[18220], 99.90th=[21103], 99.95th=[21365], 00:20:55.072 | 99.99th=[21627] 00:20:55.072 bw ( KiB/s): min=57984, max=71328, per=90.02%, avg=64960.00, stdev=7282.75, samples=4 00:20:55.072 iops : min= 3624, max= 4458, avg=4060.00, stdev=455.17, samples=4 00:20:55.072 lat (msec) : 4=0.11%, 10=41.64%, 20=58.18%, 50=0.08% 00:20:55.072 cpu : usr=77.61%, sys=20.15%, ctx=41, majf=0, minf=62 00:20:55.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:55.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:55.072 issued rwts: total=15335,8150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:55.072 00:20:55.072 Run status group 0 (all jobs): 00:20:55.072 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=240MiB (251MB), run=2006-2006msec 00:20:55.072 WRITE: bw=70.5MiB/s (73.9MB/s), 70.5MiB/s-70.5MiB/s (73.9MB/s-73.9MB/s), io=127MiB (134MB), run=1807-1807msec 00:20:55.072 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:55.073 rmmod nvme_tcp 00:20:55.073 rmmod nvme_fabrics 00:20:55.073 rmmod nvme_keyring 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1549635 ']' 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1549635 
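killprocess, entered here, is the autotest helper that stops the target; together with the module unloads just traced (the rmmod lines above) it makes up the nvmftestfini teardown. In outline, as a sketch of what the trace performs rather than a verbatim copy of the helpers:

    modprobe -v -r nvme-tcp            # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid" # killprocess: stop the nvmf_tgt reactor and reap it
    ip netns delete cvl_0_0_ns_spdk    # _remove_spdk_ns: drop the target namespace
    ip -4 addr flush cvl_0_1           # leave the initiator port unconfigured for the next test

The trace that follows shows killprocess first confirming the pid's command name is reactor_0 (not sudo) before signalling it.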
00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1549635 ']' 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1549635 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1549635 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1549635' 00:20:55.073 killing process with pid 1549635 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1549635 00:20:55.073 10:27:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1549635 00:20:55.332 10:27:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:55.332 10:27:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:55.332 10:27:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:55.332 10:27:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.332 10:27:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:55.332 10:27:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.332 10:27:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.332 10:27:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:57.902 00:20:57.902 real 0m11.830s 00:20:57.902 user 0m35.451s 00:20:57.902 sys 0m3.727s 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.902 ************************************ 00:20:57.902 END TEST nvmf_fio_host 00:20:57.902 ************************************ 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.902 ************************************ 00:20:57.902 START TEST nvmf_failover 00:20:57.902 ************************************ 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:57.902 * Looking for test storage... 
00:20:57.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:57.902 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
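nvmftestinit, called here, rebuilds the same split topology for the failover test: one port of the dual-port E810 NIC (device 0x159b, ice driver, matched in the trace below) serves as the target inside a private network namespace, while the other stays in the root namespace as the initiator. Reduced to the ip/iptables plumbing the following trace performs, with interface names and addresses exactly as logged:

    ip netns add cvl_0_0_ns_spdk                      # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                # reachability check, then the reverse ping from the namespace

Splitting the two ports across namespaces lets a single machine exercise real NIC hardware on both ends of the NVMe/TCP connection.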
00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:57.903 10:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.281 10:27:48 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:59.281 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:59.281 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:59.281 Found net devices under 0000:08:00.0: cvl_0_0 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:59.281 Found net devices under 0000:08:00.1: cvl_0_1 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:59.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:20:59.281 00:20:59.281 --- 10.0.0.2 ping statistics --- 00:20:59.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.281 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:20:59.281 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:20:59.281 00:20:59.281 --- 10.0.0.1 ping statistics --- 00:20:59.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.281 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1551951 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:59.282 10:27:48 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1551951 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1551951 ']' 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.282 10:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:59.282 [2024-07-25 10:27:49.012718] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:20:59.282 [2024-07-25 10:27:49.012817] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.282 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.540 [2024-07-25 10:27:49.079435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:59.540 [2024-07-25 10:27:49.195725] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.540 [2024-07-25 10:27:49.195794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.540 [2024-07-25 10:27:49.195811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.540 [2024-07-25 10:27:49.195824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.540 [2024-07-25 10:27:49.195835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
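The nvmfappstart/waitforlisten step above amounts to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A minimal equivalent sketch in shell — the 100-iteration bound mirrors the max_retries=100 seen above, while the rpc_get_methods probe and the 0.5 s interval are illustrative, not the harness's exact implementation:

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
    # any RPC that answers proves /var/tmp/spdk.sock is up; rpc_get_methods is cheap
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done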
00:20:59.540 [2024-07-25 10:27:49.195920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.540 [2024-07-25 10:27:49.196003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.540 [2024-07-25 10:27:49.196008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.540 10:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.540 10:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:59.540 10:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.540 10:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:59.540 10:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:59.798 10:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.798 10:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:00.056 [2024-07-25 10:27:49.606510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.056 10:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:00.314 Malloc0 00:21:00.314 10:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:00.572 10:27:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:00.830 10:27:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.088 [2024-07-25 10:27:50.829678] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.088 10:27:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:01.655 [2024-07-25 10:27:51.126561] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:01.655 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:01.655 [2024-07-25 10:27:51.427561] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1552180 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1552180 /var/tmp/bdevperf.sock 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1552180 ']' 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.914 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.172 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.172 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:02.172 10:27:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:02.740 NVMe0n1 00:21:02.740 10:27:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:03.309 00:21:03.309 10:27:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1552292 00:21:03.309 10:27:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:03.309 10:27:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:04.241 10:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.499 [2024-07-25 10:27:54.073091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c20580 is same with the state(5) to be set 00:21:04.499 10:27:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:07.789 10:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:07.789 00:21:07.789 10:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:08.048 [2024-07-25 10:27:57.809288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c21340 is same with the state(5) to be set 00:21:08.048 [2024-07-25 10:27:57.809376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c21340 is same with the state(5) 
to be set
[... the tcp.c:1653:nvmf_tcp_qpair_set_recv_state "*ERROR*: The recv state of tqpair=0x1c21340 is same with the state(5) to be set" notice above repeats ~28 more times (timestamps 10:27:57.809394-809800) while the port-4421 qpair is torn down; duplicate lines elided ...]
00:21:08.307 10:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:21:11.597 10:28:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:11.597 [2024-07-25 10:28:01.113089] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:11.597 10:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:21:12.533 10:28:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:12.794 10:28:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1552292
00:21:19.376 0
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1552180
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1552180 ']'
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1552180
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1552180
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1552180'
00:21:19.376 killing process with pid 1552180
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1552180
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1552180
00:21:19.376 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
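The try.txt dump that follows is bdevperf's side of the failovers: each time a listener is removed, the TCP qpair behind it is deleted and every command still outstanding on that qpair is printed and completed with ABORTED - SQ DELETION. Rather than reading it record by record, a dump like this can be tallied with standard tools — a sketch, assuming the file sits in the current directory as try.txt:

grep -c 'ABORTED - SQ DELETION' try.txt        # total aborted completions
awk '/nvme_io_qpair_print_command/ { n[$0 ~ /WRITE/ ? "WRITE" : "READ"]++ }
     END { for (op in n) print op, n[op] }' try.txt   # aborted commands by opcode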
[2024-07-25 10:27:51.497391] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:21:19.376 [2024-07-25 10:27:51.497511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552180 ]
00:21:19.376 EAL: No free 2048 kB hugepages reported on node 1
00:21:19.376 [2024-07-25 10:27:51.569235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:19.376 [2024-07-25 10:27:51.719532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:19.376 Running I/O for 15 seconds...
00:21:19.377 [2024-07-25 10:27:54.074304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.377 [2024-07-25 10:27:54.074349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.377 [2024-07-25 10:27:54.074415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.377 [2024-07-25 10:27:54.074432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats at 10:27:54 for every command outstanding on the deleted qpair — in-flight READs (lba 66800 and up) and WRITEs (lba 67448 and up), all len:8, each completed ABORTED - SQ DELETION (00/08) — and then, for the still-queued READs, nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs "aborting queued i/o" / nvme_qpair.c: 558:nvme_qpair_manual_complete_request "Command completed manually:" notices; hundreds of near-identical lines elided ...]
m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.078633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.380 [2024-07-25 10:27:54.078645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.380 [2024-07-25 10:27:54.078657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67392 len:8 PRP1 0x0 PRP2 0x0 00:21:19.380 [2024-07-25 10:27:54.078671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.078685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.380 [2024-07-25 10:27:54.078697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.380 [2024-07-25 10:27:54.078710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67400 len:8 PRP1 0x0 PRP2 0x0 00:21:19.380 [2024-07-25 10:27:54.078724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.078739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.380 [2024-07-25 10:27:54.078751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.380 [2024-07-25 10:27:54.078763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67408 len:8 PRP1 0x0 PRP2 0x0 00:21:19.380 [2024-07-25 10:27:54.078777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.078791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.380 [2024-07-25 10:27:54.078803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.380 [2024-07-25 10:27:54.078815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67416 len:8 PRP1 0x0 PRP2 0x0 00:21:19.380 [2024-07-25 10:27:54.078828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.078843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.380 [2024-07-25 10:27:54.078856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.380 [2024-07-25 10:27:54.078868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67424 len:8 PRP1 0x0 PRP2 0x0 00:21:19.380 [2024-07-25 10:27:54.078886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.078900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.380 [2024-07-25 10:27:54.078913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.380 [2024-07-25 10:27:54.078925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67432 len:8 PRP1 0x0 PRP2 0x0 00:21:19.380 [2024-07-25 10:27:54.078940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.078954] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.380 [2024-07-25 10:27:54.078971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.380 [2024-07-25 10:27:54.078984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67440 len:8 PRP1 0x0 PRP2 0x0 00:21:19.380 [2024-07-25 10:27:54.078998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.079060] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20b6980 was disconnected and freed. reset controller. 00:21:19.380 [2024-07-25 10:27:54.079083] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:19.380 [2024-07-25 10:27:54.079123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.380 [2024-07-25 10:27:54.079142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.079159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.380 [2024-07-25 10:27:54.079174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.079200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.380 [2024-07-25 10:27:54.079223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.079249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.380 [2024-07-25 10:27:54.079265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.380 [2024-07-25 10:27:54.079280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:19.380 [2024-07-25 10:27:54.083400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.380 [2024-07-25 10:27:54.083441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090430 (9): Bad file descriptor 00:21:19.380 [2024-07-25 10:27:54.118976] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:19.380 [2024-07-25 10:27:57.810634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.380 [2024-07-25 10:27:57.810678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.380 [2024-07-25 10:27:57.810708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.380 [2024-07-25 10:27:57.810726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.380 [2024-07-25 10:27:57.810750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.380 [2024-07-25 10:27:57.810766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.380 [2024-07-25 10:27:57.810785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.380 [2024-07-25 10:27:57.810801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.380 [2024-07-25 10:27:57.810819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.380 [2024-07-25 10:27:57.810835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.380 [2024-07-25 10:27:57.810852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.380 [2024-07-25 10:27:57.810869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.380 [2024-07-25 10:27:57.810887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.380 [2024-07-25 10:27:57.810903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.380 [2024-07-25 10:27:57.810921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.810936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.810953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.810968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.810985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.381 [2024-07-25 10:27:57.811737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.381 [2024-07-25 10:27:57.811769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.381 [2024-07-25 10:27:57.811802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.381 [2024-07-25 10:27:57.811834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.381 [2024-07-25 10:27:57.811865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.381 [2024-07-25 10:27:57.811897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.381 [2024-07-25 10:27:57.811928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.381 [2024-07-25 10:27:57.811960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.381 [2024-07-25 10:27:57.811976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.381 [2024-07-25 10:27:57.811991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.812982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.812999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.813014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.813030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.813045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.813062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.813077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.813094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.813108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.813126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.813140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.813158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.813172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.813189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.813204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.813220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.813235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.813256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.382 [2024-07-25 10:27:57.813271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.382 [2024-07-25 10:27:57.813288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.383 [2024-07-25 10:27:57.813730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.383 [2024-07-25 10:27:57.813762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.383 [2024-07-25 10:27:57.813793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.383 [2024-07-25 10:27:57.813825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.383 [2024-07-25 10:27:57.813857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.383 [2024-07-25 10:27:57.813890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.383 [2024-07-25 10:27:57.813921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.383 [2024-07-25 10:27:57.813953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.813970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.813985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.383 [2024-07-25 10:27:57.814475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.383 [2024-07-25 10:27:57.814543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32208 len:8 PRP1 0x0 PRP2 0x0
00:21:19.383 [2024-07-25 10:27:57.814557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.383 [2024-07-25 10:27:57.814656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.383 [2024-07-25 10:27:57.814676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.383 [2024-07-25 10:27:57.814691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.814708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.384 [2024-07-25 10:27:57.814723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.814739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.384 [2024-07-25 10:27:57.814753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.814768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2090430 is same with the state(5) to be set
00:21:19.384 [2024-07-25 10:27:57.815066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32216 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32224 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32232 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31536 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31544 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31552 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31560 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31568 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31576 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31584 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31592 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.384 [2024-07-25 10:27:57.815710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.384 [2024-07-25 10:27:57.815723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31216 len:8 PRP1 0x0 PRP2 0x0
00:21:19.384 [2024-07-25 10:27:57.815741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.384 [2024-07-25 10:27:57.815756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.815768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.384 [2024-07-25 10:27:57.815780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31224 len:8 PRP1 0x0 PRP2 0x0 00:21:19.384 [2024-07-25 10:27:57.815795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.384 [2024-07-25 10:27:57.815809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.815821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.384 [2024-07-25 10:27:57.815834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31232 len:8 PRP1 0x0 PRP2 0x0 00:21:19.384 [2024-07-25 10:27:57.815848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.384 [2024-07-25 10:27:57.815863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.815875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.384 [2024-07-25 10:27:57.815888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31240 len:8 PRP1 0x0 PRP2 0x0 00:21:19.384 [2024-07-25 10:27:57.815902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.384 [2024-07-25 10:27:57.815917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.815929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.384 [2024-07-25 10:27:57.815942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31248 len:8 PRP1 0x0 PRP2 0x0 00:21:19.384 [2024-07-25 10:27:57.815956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.384 [2024-07-25 10:27:57.815970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.815982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.384 [2024-07-25 10:27:57.815994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31256 len:8 PRP1 0x0 PRP2 0x0 00:21:19.384 [2024-07-25 10:27:57.816009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.384 [2024-07-25 10:27:57.816023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.816035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.384 [2024-07-25 10:27:57.816047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31264 len:8 PRP1 0x0 PRP2 0x0 00:21:19.384 [2024-07-25 10:27:57.816061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.384 [2024-07-25 10:27:57.816076] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.816089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.384 [2024-07-25 10:27:57.816102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31272 len:8 PRP1 0x0 PRP2 0x0 00:21:19.384 [2024-07-25 10:27:57.816116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.384 [2024-07-25 10:27:57.816130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.816142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.384 [2024-07-25 10:27:57.816159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31280 len:8 PRP1 0x0 PRP2 0x0 00:21:19.384 [2024-07-25 10:27:57.816174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.384 [2024-07-25 10:27:57.816189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.816201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.384 [2024-07-25 10:27:57.816214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31288 len:8 PRP1 0x0 PRP2 0x0 00:21:19.384 [2024-07-25 10:27:57.816228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.384 [2024-07-25 10:27:57.816242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.384 [2024-07-25 10:27:57.816254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31296 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31304 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31312 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:21:19.385 [2024-07-25 10:27:57.816419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31320 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31328 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31336 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31344 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31352 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31360 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816750] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31368 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31376 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31384 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31392 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.816953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.816967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.816979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31400 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.816993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.817011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.817024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.817037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31408 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.817051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.817066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.817078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:21:19.385 [2024-07-25 10:27:57.817090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31416 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.817104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.817119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.817131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.817144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31424 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.817158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.817173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.817185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.817197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31432 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.817211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.817226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.817238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.817251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31440 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.817265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.817280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.817292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.385 [2024-07-25 10:27:57.817304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31448 len:8 PRP1 0x0 PRP2 0x0 00:21:19.385 [2024-07-25 10:27:57.817318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.385 [2024-07-25 10:27:57.817333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.385 [2024-07-25 10:27:57.817345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31456 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817410] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31464 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31600 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31608 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31616 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31624 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31632 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:31640 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31648 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31656 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31664 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.817948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.817961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.817973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31672 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.817987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31680 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31688 len:8 PRP1 0x0 PRP2 
0x0 00:21:19.386 [2024-07-25 10:27:57.818095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31696 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31704 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31712 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31720 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31728 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31736 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818425] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31744 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31752 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.386 [2024-07-25 10:27:57.818589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31760 len:8 PRP1 0x0 PRP2 0x0 00:21:19.386 [2024-07-25 10:27:57.818603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.386 [2024-07-25 10:27:57.818618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.386 [2024-07-25 10:27:57.818630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.818643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31768 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.818657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.818672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.818684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.818697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31776 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.818710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.818731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.818743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.818756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31784 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.818770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.818785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.818797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.818810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31792 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.818828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.818843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.818855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.818868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31800 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.818882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.818897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.818909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.818922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31808 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.818936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.818951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.825525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.825557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31816 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.825576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.825595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.825608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.825621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31824 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.825635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.825650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.825662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.825675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31832 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.825689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.825703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.825715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.825728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31840 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.825748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.825764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.825775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.825789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31848 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.825803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.825818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.825830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.825843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31856 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.825858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.825873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.825885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.825898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31864 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.825912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.825927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.825939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.825952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31872 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.825966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.825981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.825993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31880 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.826020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 
10:27:57.826035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.826047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31888 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.826073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.826088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.826100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31896 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.826127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.826141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.826157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31904 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.826184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.826199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.826211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31912 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.826239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.826253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.826265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31920 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.826292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.826307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.826319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31928 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.826345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.826360] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.826372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31936 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.826399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.826413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.826425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31944 len:8 PRP1 0x0 PRP2 0x0 00:21:19.387 [2024-07-25 10:27:57.826453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.387 [2024-07-25 10:27:57.826468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.387 [2024-07-25 10:27:57.826503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.387 [2024-07-25 10:27:57.826519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31952 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.826533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.826550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.826562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.826575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31960 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.826589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.826608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.826622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.826646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31968 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.826663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.826680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.826692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.826704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31976 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.826719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.826733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:21:19.388 [2024-07-25 10:27:57.826745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.826758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31984 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.826772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.826788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.826800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.826812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31992 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.826826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.826841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.826853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.826866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32000 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.826880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.826894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.826906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.826919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32008 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.826934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.826949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.826961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.826973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32016 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.826988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.827003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.827015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.827028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32024 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.827047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.827062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.827074] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.827087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32032 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.827100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.827115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.827127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.827140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32040 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.827154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.827168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.827180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.827193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32048 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.827208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.827222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.827234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.827247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32056 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.827261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.827276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.827288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.827301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32064 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.827315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.827330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.827342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.388 [2024-07-25 10:27:57.827354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32072 len:8 PRP1 0x0 PRP2 0x0 00:21:19.388 [2024-07-25 10:27:57.827370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.388 [2024-07-25 10:27:57.827384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.388 [2024-07-25 10:27:57.827396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually:
00:21:19.388 [2024-07-25 10:27:57.827409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31472 len:8 PRP1 0x0 PRP2 0x0
00:21:19.388 [2024-07-25 10:27:57.827423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.388 [2024-07-25 10:27:57.827438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.388 [2024-07-25 10:27:57.827450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the same four-record sequence repeats for each remaining queued command: READ lba:31480 through lba:31528 and WRITE lba:32080 through lba:32200 (step 8), all sqid:1 cid:0, each completed as ABORTED - SQ DELETION (00/08) ...]
00:21:19.389 [2024-07-25 10:27:57.828803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32208 len:8 PRP1 0x0 PRP2 0x0
00:21:19.389 [2024-07-25 10:27:57.828821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.389 [2024-07-25 10:27:57.828884] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20afc30 was disconnected and freed. reset controller.
00:21:19.389 [2024-07-25 10:27:57.828906] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:19.389 [2024-07-25 10:27:57.828923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:19.389 [2024-07-25 10:27:57.828994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090430 (9): Bad file descriptor
00:21:19.389 [2024-07-25 10:27:57.833056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:19.389 [2024-07-25 10:27:58.022981] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:19.389 [2024-07-25 10:28:02.419424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.389 [2024-07-25 10:28:02.419513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for the remaining outstanding I/O: WRITE lba:65216 through lba:65960 (step 8, varying cids) and READ lba:65064 and lba:65072 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed as ABORTED - SQ DELETION (00/08) ...]
00:21:19.392 [2024-07-25 10:28:02.422707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.392 [2024-07-25 10:28:02.422725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65968 len:8 PRP1 0x0 PRP2 0x0
00:21:19.392 [2024-07-25 10:28:02.422744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.392 [2024-07-25 10:28:02.423009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.392 [2024-07-25 10:28:02.423030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.392 [2024-07-25 10:28:02.423044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65976 len:8 PRP1 0x0 PRP2 0x0
00:21:19.392 [2024-07-25 10:28:02.423058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same four-record sequence repeats for each remaining queued command: WRITE lba:65984 through lba:66072 (step 8), READ lba:65080 through lba:65200 (step 8), WRITE lba:66080, and WRITE lba:65208 through lba:65248 (step 8), all sqid:1 cid:0, each completed as ABORTED - SQ DELETION (00/08) ...]
00:21:19.393 [2024-07-25 10:28:02.425020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:19.393 [2024-07-25 10:28:02.425032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:19.394 [2024-07-25 10:28:02.425044] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65256 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.425058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.425072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.425084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65264 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65272 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65280 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65288 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65296 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:65304 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65312 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65320 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65328 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65336 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65344 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65352 len:8 PRP1 0x0 PRP2 0x0 
00:21:19.394 [2024-07-25 10:28:02.443805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65360 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65368 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.443948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65376 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.443963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.443977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.443988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.444000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65384 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.444015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.444030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.444041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.444054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65392 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.444072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.444087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.444099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.444111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65400 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.444125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.444140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.444151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.444164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65408 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.444178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.444193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.444204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.444217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65416 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.444231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.394 [2024-07-25 10:28:02.444246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.394 [2024-07-25 10:28:02.444258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.394 [2024-07-25 10:28:02.444270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65424 len:8 PRP1 0x0 PRP2 0x0 00:21:19.394 [2024-07-25 10:28:02.444285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65432 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65440 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65448 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65456 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65464 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65472 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65480 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65488 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65496 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:19.395 [2024-07-25 10:28:02.444830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65504 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65512 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.444954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.444966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65520 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.444980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.444995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.445007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.445020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65528 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.445034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.445049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.445061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.445073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.445087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.445101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.445113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.445126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65544 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.445147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.445162] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.445174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.445187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65552 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.445201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.445216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.445228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.445240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65560 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.445254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.445268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.445280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.445293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65568 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.445307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.445321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.445333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.445345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65576 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.445360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.445378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.445390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.445402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65584 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.445417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.395 [2024-07-25 10:28:02.445431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.395 [2024-07-25 10:28:02.445443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.395 [2024-07-25 10:28:02.445455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65592 len:8 PRP1 0x0 PRP2 0x0 00:21:19.395 [2024-07-25 10:28:02.445469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.445493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.445507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.445519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65600 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.445533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.445548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.445560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.445573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65608 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.445591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.445606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.445618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.445631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65616 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.445646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.445660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.445672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.445684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65624 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.445698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.445713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.445725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.445737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65632 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.445751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.445765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.445777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.445790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65640 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.445808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.445823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 
10:28:02.445841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.445854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65648 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.445868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.445888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.445900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.445913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65656 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.445936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.445953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.445965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.445978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65664 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.445991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65672 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65680 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65688 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446193] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65696 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65704 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65712 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65720 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65728 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65736 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65744 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65752 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65760 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65768 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.396 [2024-07-25 10:28:02.446764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.396 [2024-07-25 10:28:02.446776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65776 len:8 PRP1 0x0 PRP2 0x0 00:21:19.396 [2024-07-25 10:28:02.446790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.396 [2024-07-25 10:28:02.446804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65784 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 
10:28:02.457268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65792 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65800 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65808 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65816 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65824 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65832 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65064 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65072 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65840 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65848 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65856 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65864 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.457950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.457965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.457977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.457993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:65872 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.458008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.458047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65880 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.458061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.458101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65888 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.458120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.458162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65896 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.458177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.458217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65904 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.458232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.458272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65912 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.458286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.458332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65920 len:8 PRP1 0x0 PRP2 0x0 
00:21:19.397 [2024-07-25 10:28:02.458347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.458386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65928 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.458400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.458445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65936 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.458459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.397 [2024-07-25 10:28:02.458509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65944 len:8 PRP1 0x0 PRP2 0x0 00:21:19.397 [2024-07-25 10:28:02.458523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.397 [2024-07-25 10:28:02.458538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.397 [2024-07-25 10:28:02.458550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.398 [2024-07-25 10:28:02.458563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65952 len:8 PRP1 0x0 PRP2 0x0 00:21:19.398 [2024-07-25 10:28:02.458577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.398 [2024-07-25 10:28:02.458592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.398 [2024-07-25 10:28:02.458605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.398 [2024-07-25 10:28:02.458617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65960 len:8 PRP1 0x0 PRP2 0x0 00:21:19.398 [2024-07-25 10:28:02.458632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.398 [2024-07-25 10:28:02.458647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.398 [2024-07-25 10:28:02.458659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.398 [2024-07-25 10:28:02.458672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65968 len:8 PRP1 0x0 PRP2 0x0 00:21:19.398 [2024-07-25 10:28:02.458686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.398 [2024-07-25 10:28:02.458753] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c0210 was disconnected and freed. reset controller. 00:21:19.398 [2024-07-25 10:28:02.458780] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:19.398 [2024-07-25 10:28:02.458822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.398 [2024-07-25 10:28:02.458842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.398 [2024-07-25 10:28:02.458859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.398 [2024-07-25 10:28:02.458874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.398 [2024-07-25 10:28:02.458889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.398 [2024-07-25 10:28:02.458904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.398 [2024-07-25 10:28:02.458920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.398 [2024-07-25 10:28:02.458935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.398 [2024-07-25 10:28:02.458960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:19.398 [2024-07-25 10:28:02.459012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090430 (9): Bad file descriptor 00:21:19.398 [2024-07-25 10:28:02.463185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.398 [2024-07-25 10:28:02.630121] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
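[Editor's annotation] The failover traced above (10.0.0.2:4422 back to 10.0.0.2:4420) follows the standard bdev_nvme sequence: the active qpair is deleted, queued I/O is completed manually as ABORTED - SQ DELETION, and the controller is reset against the next registered path. The same switch can be forced by hand over the bdevperf RPC socket, as the harness itself does a few lines below; a minimal sketch, assuming the controller is already attached to all three listeners as in this run:

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    # Drop the active path; bdev_nvme aborts its queued I/O (the SQ DELETION
    # notices above) and resets the controller against the next trid.
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Verify the controller survived the path switch.
    $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0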
00:21:19.398 00:21:19.398 Latency(us) 00:21:19.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.398 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:19.398 Verification LBA range: start 0x0 length 0x4000 00:21:19.398 NVMe0n1 : 15.01 7080.16 27.66 803.25 0.00 16201.84 540.07 46603.38 00:21:19.398 =================================================================================================================== 00:21:19.398 Total : 7080.16 27.66 803.25 0.00 16201.84 540.07 46603.38 00:21:19.398 Received shutdown signal, test time was about 15.000000 seconds 00:21:19.398 00:21:19.398 Latency(us) 00:21:19.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.398 =================================================================================================================== 00:21:19.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1553725 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1553725 /var/tmp/bdevperf.sock 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1553725 ']' 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
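[Editor's annotation] The pass gate at host/failover.sh@65-67 above is a simple count: every successful path switch logs one "Resetting controller successful" line, and the first bdevperf run is expected to produce exactly three. A sketch of that check, assuming the run's output was captured to the try.txt file that later lines cat and remove:

    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$out")
    # Three forced failovers must yield three successful resets, or the test fails.
    (( count == 3 )) || exit 1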
00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:19.398 [2024-07-25 10:28:08.849668] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:19.398 10:28:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:19.657 [2024-07-25 10:28:09.146528] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:19.657 10:28:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:19.914 NVMe0n1 00:21:19.915 10:28:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.174 00:21:20.174 10:28:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.741 00:21:20.741 10:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:20.741 10:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:20.999 10:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:21.259 10:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:24.553 10:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:24.553 10:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:24.553 10:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1554282 00:21:24.553 10:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.553 10:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1554282 00:21:25.933 0 00:21:25.933 10:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:25.933 [2024-07-25 10:28:08.292171] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:21:25.933 [2024-07-25 10:28:08.292272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553725 ] 00:21:25.933 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.933 [2024-07-25 10:28:08.354066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.933 [2024-07-25 10:28:08.470264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.933 [2024-07-25 10:28:10.891753] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:25.933 [2024-07-25 10:28:10.891875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.933 [2024-07-25 10:28:10.891900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.933 [2024-07-25 10:28:10.891921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.933 [2024-07-25 10:28:10.891935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.933 [2024-07-25 10:28:10.891951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.933 [2024-07-25 10:28:10.891968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.933 [2024-07-25 10:28:10.891984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.933 [2024-07-25 10:28:10.891999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.933 [2024-07-25 10:28:10.892014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.933 [2024-07-25 10:28:10.892073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.933 [2024-07-25 10:28:10.892110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb28430 (9): Bad file descriptor 00:21:25.933 [2024-07-25 10:28:10.899581] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:25.933 Running I/O for 1 seconds... 
00:21:25.933 00:21:25.933 Latency(us) 00:21:25.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.933 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:25.933 Verification LBA range: start 0x0 length 0x4000 00:21:25.933 NVMe0n1 : 1.01 7256.04 28.34 0.00 0.00 17550.34 2087.44 13883.92 00:21:25.933 =================================================================================================================== 00:21:25.933 Total : 7256.04 28.34 0.00 0.00 17550.34 2087.44 13883.92 00:21:25.933 10:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:25.933 10:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:25.933 10:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.197 10:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:26.197 10:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:26.461 10:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.729 10:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1553725 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1553725 ']' 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1553725 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553725 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553725' 00:21:30.061 killing process with pid 1553725 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1553725 00:21:30.061 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1553725 00:21:30.320 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:30.320 10:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.579 rmmod nvme_tcp 00:21:30.579 rmmod nvme_fabrics 00:21:30.579 rmmod nvme_keyring 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1551951 ']' 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1551951 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1551951 ']' 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1551951 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1551951 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1551951' 00:21:30.579 killing process with pid 1551951 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1551951 00:21:30.579 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1551951 00:21:30.838 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:30.838 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:30.838 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:30.838 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.838 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.838 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.838 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.838 10:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.379 10:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:33.379 00:21:33.379 real 0m35.477s 00:21:33.379 user 2m5.146s 00:21:33.379 sys 0m6.317s 00:21:33.379 10:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:33.379 10:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:33.379 ************************************ 00:21:33.379 END TEST nvmf_failover 00:21:33.379 ************************************ 00:21:33.379 10:28:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:33.379 10:28:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:33.379 10:28:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.380 ************************************ 00:21:33.380 START TEST nvmf_host_discovery 00:21:33.380 ************************************ 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:33.380 * Looking for test storage... 00:21:33.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:33.380 10:28:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.380 10:28:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:34.759 10:28:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:34.759 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:34.759 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:34.759 Found net devices under 0000:08:00.0: cvl_0_0 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:34.759 Found net devices under 0000:08:00.1: cvl_0_1 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.759 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:34.760 10:28:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:34.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:21:34.760 00:21:34.760 --- 10.0.0.2 ping statistics --- 00:21:34.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.760 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:34.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:34.760 00:21:34.760 --- 10.0.0.1 ping statistics --- 00:21:34.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.760 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:34.760 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1556285 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1556285 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1556285 ']' 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
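[Editor's annotation] The two ping probes above validate the wired loop this phy suite depends on: one port of the ice-bound pair (0000:08:00.0/0.1) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the setup the xtrace records, assuming the same cvl_0_0/cvl_0_1 interface names:

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1          # target -> initiator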
00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.021 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.021 [2024-07-25 10:28:24.602747] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:21:35.021 [2024-07-25 10:28:24.602838] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.021 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.021 [2024-07-25 10:28:24.668258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.021 [2024-07-25 10:28:24.786676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.021 [2024-07-25 10:28:24.786740] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.021 [2024-07-25 10:28:24.786755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.021 [2024-07-25 10:28:24.786768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.021 [2024-07-25 10:28:24.786780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.021 [2024-07-25 10:28:24.786821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.280 [2024-07-25 10:28:24.926858] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:21:35.280 [2024-07-25 10:28:24.935043] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.280 null0 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.280 null1 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1556403 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1556403 /tmp/host.sock 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1556403 ']' 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:35.280 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.280 10:28:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.280 [2024-07-25 10:28:25.013131] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
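[Editor's annotation] The discovery test runs two SPDK applications side by side: the nvmf target inside the namespace (with a discovery listener on 10.0.0.2:8009 and two null bdevs to expose) and a second nvmf_tgt on core 0 acting as the host, reachable over /tmp/host.sock. A sketch of the sequence the surrounding xtrace records (the discovery start itself follows just below), with rpc.py standing in for the rpc_cmd wrapper and paths abbreviated:

    # Target side (default RPC socket, inside cvl_0_0_ns_spdk):
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512        # 1000 MB, 512 B blocks
    rpc.py bdev_null_create null1 1000 512

    # Host side: a second app whose bdev_nvme layer follows the discovery log.
    # (The harness waits for the RPC socket before issuing any commands.)
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
      -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test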
00:21:35.281 [2024-07-25 10:28:25.013225] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556403 ] 00:21:35.281 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.539 [2024-07-25 10:28:25.074187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.539 [2024-07-25 10:28:25.190923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.539 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.798 
10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:35.798 10:28:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.798 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.057 [2024-07-25 10:28:25.588802] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:21:36.057 10:28:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:36.625 [2024-07-25 10:28:26.354617] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:36.625 [2024-07-25 10:28:26.354659] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:36.625 [2024-07-25 10:28:26.354684] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:36.885 
[2024-07-25 10:28:26.440944] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:36.885 [2024-07-25 10:28:26.505499] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:36.885 [2024-07-25 10:28:26.505526] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
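The waitforcondition helper driving the checks above is visible in the xtrace itself (autotest_common.sh lines @914 through @920): it evals the condition string up to max=10 times, sleeping one second between attempts. A minimal sketch reconstructed from those trace lines; the non-zero return on timeout is an assumption, since only the success path appears in this log:

    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]' as traced at @914
        local max=10       # matches 'local max=10' at autotest_common.sh@915
        while (( max-- )); do          # decrement-and-test seen at @916
            if eval "$cond"; then      # condition re-evaluated each pass, @917
                return 0               # success path seen at @918
            fi
            sleep 1                    # retry delay seen at @920
        done
        return 1   # assumption: timeout path, not exercised in this run
    }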
00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:37.144 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.145 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:37.404 10:28:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:37.404 10:28:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.404 [2024-07-25 10:28:27.053277] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:37.404 [2024-07-25 10:28:27.054283] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:37.404 [2024-07-25 10:28:27.054328] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.404 [2024-07-25 10:28:27.143009] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:37.404 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:37.405 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:37.405 10:28:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:37.405 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:37.405 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:37.405 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:37.405 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.405 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.405 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:37.405 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:37.405 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.664 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:37.664 10:28:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:37.664 [2024-07-25 10:28:27.208677] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:37.664 [2024-07-25 10:28:27.208705] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:37.664 [2024-07-25 10:28:27.208716] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:38.602 10:28:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.602 [2024-07-25 10:28:28.281102] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:38.602 [2024-07-25 10:28:28.281138] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:38.602 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:38.602 [2024-07-25 10:28:28.287011] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.602 [2024-07-25 10:28:28.287045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.602 [2024-07-25 10:28:28.287063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.602 [2024-07-25 10:28:28.287078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.602 [2024-07-25 10:28:28.287093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.602 [2024-07-25 10:28:28.287108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.602 [2024-07-25 10:28:28.287124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.602 [2024-07-25 10:28:28.287138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.603 [2024-07-25 10:28:28.287153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:38.603 [2024-07-25 10:28:28.297019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.603 [2024-07-25 10:28:28.307075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.603 [2024-07-25 10:28:28.307307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.603 [2024-07-25 10:28:28.307339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.603 [2024-07-25 10:28:28.307357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.603 [2024-07-25 10:28:28.307382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.603 [2024-07-25 10:28:28.307418] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.603 [2024-07-25 10:28:28.307436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.603 [2024-07-25 10:28:28.307453] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.603 [2024-07-25 10:28:28.307475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:38.603 [2024-07-25 10:28:28.317158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.603 [2024-07-25 10:28:28.317350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.603 [2024-07-25 10:28:28.317379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.603 [2024-07-25 10:28:28.317396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.603 [2024-07-25 10:28:28.317420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.603 [2024-07-25 10:28:28.317454] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.603 [2024-07-25 10:28:28.317471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.603 [2024-07-25 10:28:28.317497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.603 [2024-07-25 10:28:28.317519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:38.603 [2024-07-25 10:28:28.327235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.603 [2024-07-25 10:28:28.327431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.603 [2024-07-25 10:28:28.327461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.603 [2024-07-25 10:28:28.327486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.603 [2024-07-25 10:28:28.327512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.603 [2024-07-25 10:28:28.327548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.603 [2024-07-25 10:28:28.327566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.603 [2024-07-25 10:28:28.327581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.603 [2024-07-25 10:28:28.327602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
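The repeated "connect() failed, errno = 111" entries here are expected once the 4420 listener has been removed: errno 111 on Linux is ECONNREFUSED, so each reconnect attempt by the controller reset path is refused until the next discovery log page drops the 4420 path. A quick shell one-liner to confirm the errno mapping on the build host:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # prints: ECONNREFUSED Connection refused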
00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:38.603 [2024-07-25 10:28:28.337324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.603 [2024-07-25 10:28:28.337511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.603 [2024-07-25 10:28:28.337541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.603 [2024-07-25 10:28:28.337558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.603 [2024-07-25 10:28:28.337583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.603 [2024-07-25 10:28:28.337617] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.603 [2024-07-25 10:28:28.337635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.603 [2024-07-25 10:28:28.337650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.603 [2024-07-25 10:28:28.337671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
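The get_bdev_list condition being polled in the @130 check above is, per the xtrace at discovery.sh@55, just an RPC against the host app's socket piped through jq, sort, and xargs; get_subsystem_names (@59) has the same shape against bdev_nvme_get_controllers. A sketch reconstructed from the trace, with /tmp/host.sock hard-coded as in this run and rpc_cmd being the suite's rpc wrapper:

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }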
00:21:38.603 [2024-07-25 10:28:28.347404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.603 [2024-07-25 10:28:28.347584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.603 [2024-07-25 10:28:28.347627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.603 [2024-07-25 10:28:28.347646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.603 [2024-07-25 10:28:28.347673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.603 [2024-07-25 10:28:28.347713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.603 [2024-07-25 10:28:28.347732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.603 [2024-07-25 10:28:28.347747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.603 [2024-07-25 10:28:28.347784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:38.603 [2024-07-25 10:28:28.357487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.603 [2024-07-25 10:28:28.357690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.603 [2024-07-25 10:28:28.357739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.603 [2024-07-25 10:28:28.357758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.603 [2024-07-25 10:28:28.357785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.603 [2024-07-25 10:28:28.357823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.603 [2024-07-25 10:28:28.357841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.603 [2024-07-25 10:28:28.357856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.603 [2024-07-25 10:28:28.357879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
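The path check that follows (discovery.sh@131) waits for get_subsystem_paths nvme0 to shrink to just $NVMF_SECOND_PORT (4421). Per the @63 trace lines, the helper lists the trsvcid of every connected path for the named controller; a sketch under the same assumptions as above:

    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # e.g. get_subsystem_paths nvme0 -> "4420 4421" before the listener
    # removal, "4421" once the 4420 path has been dropped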
00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.603 [2024-07-25 10:28:28.367568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.603 [2024-07-25 10:28:28.367746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.603 [2024-07-25 10:28:28.367777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.603 [2024-07-25 10:28:28.367796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.603 [2024-07-25 10:28:28.367821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.603 [2024-07-25 10:28:28.367886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.603 [2024-07-25 10:28:28.367908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.603 [2024-07-25 10:28:28.367923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.603 [2024-07-25 10:28:28.367945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:38.603 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:38.604 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:38.604 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:38.604 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:38.604 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.604 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:38.604 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.604 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:38.604 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:38.604 [2024-07-25 10:28:28.377651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.604 [2024-07-25 10:28:28.377832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.604 [2024-07-25 
10:28:28.377864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.604 [2024-07-25 10:28:28.377881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.604 [2024-07-25 10:28:28.377912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.604 [2024-07-25 10:28:28.377942] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.604 [2024-07-25 10:28:28.377957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.604 [2024-07-25 10:28:28.377972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.604 [2024-07-25 10:28:28.377992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:38.862 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.862 [2024-07-25 10:28:28.387741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.862 [2024-07-25 10:28:28.387896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.862 [2024-07-25 10:28:28.387926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.862 [2024-07-25 10:28:28.387943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.862 [2024-07-25 10:28:28.387967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.862 [2024-07-25 10:28:28.388003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.862 [2024-07-25 10:28:28.388021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.862 [2024-07-25 10:28:28.388036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.862 [2024-07-25 10:28:28.388057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:38.862 [2024-07-25 10:28:28.397822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.862 [2024-07-25 10:28:28.397990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.862 [2024-07-25 10:28:28.398020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.862 [2024-07-25 10:28:28.398037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.862 [2024-07-25 10:28:28.398061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.862 [2024-07-25 10:28:28.398082] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.862 [2024-07-25 10:28:28.398097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.862 [2024-07-25 10:28:28.398111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.862 [2024-07-25 10:28:28.398132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:38.862 [2024-07-25 10:28:28.407900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.862 [2024-07-25 10:28:28.408044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.862 [2024-07-25 10:28:28.408074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2022ee0 with addr=10.0.0.2, port=4420 00:21:38.862 [2024-07-25 10:28:28.408090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2022ee0 is same with the state(5) to be set 00:21:38.862 [2024-07-25 10:28:28.408120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022ee0 (9): Bad file descriptor 00:21:38.862 [2024-07-25 10:28:28.408156] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.862 [2024-07-25 10:28:28.408174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.862 [2024-07-25 10:28:28.408189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.862 [2024-07-25 10:28:28.408210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
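Once the discovery poller reports the 4420 path "not found" just below, the test re-checks the notification stream with is_notification_count_eq 0. From the @74/@75 trace lines, get_notification_count fetches the notifications newer than the current $notify_id and advances that cursor; the increment rule is inferred from the values in this log (notify_id goes 0, 1, 2, 2, 4), so treat this as a sketch:

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))   # inferred from the traced values
    }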
00:21:38.862 [2024-07-25 10:28:28.408261] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:38.862 [2024-07-25 10:28:28.408290] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:38.862 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:38.862 10:28:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.817 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.075 10:28:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.013 [2024-07-25 10:28:30.676082] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:41.013 [2024-07-25 10:28:30.676113] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:41.013 [2024-07-25 10:28:30.676138] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:41.013 [2024-07-25 10:28:30.762438] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:41.273 [2024-07-25 10:28:30.830409] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:41.273 [2024-07-25 10:28:30.830457] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.273 request: 00:21:41.273 { 00:21:41.273 "name": "nvme", 00:21:41.273 "trtype": "tcp", 00:21:41.273 "traddr": "10.0.0.2", 00:21:41.273 "adrfam": "ipv4", 00:21:41.273 "trsvcid": "8009", 00:21:41.273 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:41.273 "wait_for_attach": true, 00:21:41.273 "method": "bdev_nvme_start_discovery", 00:21:41.273 "req_id": 1 00:21:41.273 } 00:21:41.273 Got JSON-RPC error response 00:21:41.273 response: 00:21:41.273 { 00:21:41.273 "code": -17, 00:21:41.273 "message": "File exists" 00:21:41.273 } 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:41.273 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.274 request: 00:21:41.274 { 00:21:41.274 "name": "nvme_second", 00:21:41.274 "trtype": "tcp", 00:21:41.274 "traddr": "10.0.0.2", 00:21:41.274 "adrfam": "ipv4", 00:21:41.274 "trsvcid": "8009", 00:21:41.274 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:41.274 "wait_for_attach": true, 00:21:41.274 "method": "bdev_nvme_start_discovery", 00:21:41.274 "req_id": 1 00:21:41.274 } 00:21:41.274 Got JSON-RPC error response 00:21:41.274 response: 00:21:41.274 { 00:21:41.274 "code": -17, 00:21:41.274 "message": "File exists" 00:21:41.274 } 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:41.274 10:28:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:41.274 10:28:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.274 10:28:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.651 [2024-07-25 10:28:32.050912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.651 [2024-07-25 10:28:32.050963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2023c70 with addr=10.0.0.2, port=8010 00:21:42.651 [2024-07-25 10:28:32.050989] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:42.651 [2024-07-25 10:28:32.051005] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:42.651 [2024-07-25 10:28:32.051018] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:43.589 [2024-07-25 10:28:33.053282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-07-25 10:28:33.053327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2023c70 with addr=10.0.0.2, port=8010 00:21:43.589 [2024-07-25 10:28:33.053352] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:43.589 [2024-07-25 10:28:33.053367] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:21:43.589 [2024-07-25 10:28:33.053380] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:44.528 [2024-07-25 10:28:34.055530] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:44.528 request: 00:21:44.528 { 00:21:44.528 "name": "nvme_second", 00:21:44.528 "trtype": "tcp", 00:21:44.528 "traddr": "10.0.0.2", 00:21:44.528 "adrfam": "ipv4", 00:21:44.528 "trsvcid": "8010", 00:21:44.528 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:44.528 "wait_for_attach": false, 00:21:44.528 "attach_timeout_ms": 3000, 00:21:44.528 "method": "bdev_nvme_start_discovery", 00:21:44.528 "req_id": 1 00:21:44.528 } 00:21:44.528 Got JSON-RPC error response 00:21:44.528 response: 00:21:44.528 { 00:21:44.528 "code": -110, 00:21:44.528 "message": "Connection timed out" 00:21:44.528 } 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1556403 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.528 rmmod nvme_tcp 00:21:44.528 rmmod nvme_fabrics 00:21:44.528 rmmod nvme_keyring 00:21:44.528 10:28:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1556285 ']' 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1556285 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1556285 ']' 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1556285 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1556285 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1556285' 00:21:44.528 killing process with pid 1556285 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1556285 00:21:44.528 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1556285 00:21:44.788 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.788 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.788 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.788 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.788 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.788 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.789 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.789 10:28:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.695 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.695 00:21:46.695 real 0m13.771s 00:21:46.695 user 0m20.959s 00:21:46.695 sys 0m2.572s 00:21:46.695 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:46.695 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.695 ************************************ 00:21:46.695 END TEST nvmf_host_discovery 00:21:46.695 ************************************ 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:21:46.953 10:28:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.953 ************************************ 00:21:46.953 START TEST nvmf_host_multipath_status 00:21:46.953 ************************************ 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:46.953 * Looking for test storage... 00:21:46.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:46.953 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.954 10:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:48.859 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.859 
10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:48.859 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:48.859 Found net devices under 0000:08:00.0: cvl_0_0 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.859 10:28:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:48.859 Found net devices under 0000:08:00.1: cvl_0_1 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:48.859 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:48.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:21:48.860 00:21:48.860 --- 10.0.0.2 ping statistics --- 00:21:48.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.860 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:21:48.860 00:21:48.860 --- 10.0.0.1 ping statistics --- 00:21:48.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.860 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1558890 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1558890 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1558890 ']' 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:48.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.860 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:48.860 [2024-07-25 10:28:38.415568] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:21:48.860 [2024-07-25 10:28:38.415664] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.860 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.860 [2024-07-25 10:28:38.480410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:48.860 [2024-07-25 10:28:38.596512] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.860 [2024-07-25 10:28:38.596579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.860 [2024-07-25 10:28:38.596594] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.860 [2024-07-25 10:28:38.596608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.860 [2024-07-25 10:28:38.596620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.860 [2024-07-25 10:28:38.596729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.860 [2024-07-25 10:28:38.596800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.119 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.119 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:49.119 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.119 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.119 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:49.119 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.119 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1558890 00:21:49.119 10:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:49.376 [2024-07-25 10:28:39.002569] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.376 10:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:49.633 Malloc0 00:21:49.633 10:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:49.891 10:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.149 10:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.407 [2024-07-25 10:28:40.155606] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.407 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:50.664 [2024-07-25 10:28:40.408226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1559110 00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1559110 /var/tmp/bdevperf.sock 00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1559110 ']' 00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.665 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:51.231 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.231 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:51.231 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:51.231 10:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:51.798 Nvme0n1 00:21:51.798 10:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:52.364 Nvme0n1 00:21:52.364 10:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:52.364 10:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:54.895 10:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:54.895 10:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:54.895 10:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:55.154 10:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:56.115 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:56.115 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:56.115 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.115 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:56.373 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.373 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:56.373 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
00:21:54.895 10:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:21:54.895 10:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:21:54.895 10:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:21:55.154 10:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:21:56.115 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:21:56.115 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:56.115 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:56.115 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:56.373 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:56.373 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:21:56.373 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:56.373 10:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:56.630 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:56.630 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:56.630 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:56.630 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:56.888 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:56.888 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:56.888 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:56.888 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:57.146 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:57.146 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:57.146 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:57.146 10:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:57.714 10:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:57.714 10:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:57.714 10:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:57.714 10:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:57.973 10:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:57.973 10:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:21:57.973 10:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:21:58.231 10:28:47
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:58.490 10:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:59.428 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:59.428 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:59.428 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.428 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:59.686 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:59.686 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:59.686 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.686 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:59.943 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.943 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:59.943 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.943 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:00.201 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.201 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:00.201 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.201 10:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:00.458 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.458 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:00.458 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.458 10:28:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:00.715 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.715 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:00.715 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.715 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:00.973 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.973 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:00.973 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:01.231 10:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:01.489 10:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:02.429 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:02.429 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:02.429 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.429 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:02.995 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.995 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:02.995 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.995 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:03.254 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:03.254 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:03.254 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.254 10:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:03.512 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.512 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:03.512 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.512 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:03.768 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.768 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:03.768 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.768 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:04.026 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.026 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:04.026 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.026 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:04.284 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.284 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:04.284 10:28:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:04.542 10:28:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:05.112 10:28:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:06.050 10:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:06.050 10:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:06.050 10:28:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.050 10:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:06.309 10:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.309 10:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:06.309 10:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.309 10:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:06.567 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:06.567 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:06.567 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.567 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:06.826 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.826 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:06.826 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.826 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:07.084 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.084 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:07.084 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.084 10:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:07.344 10:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.344 10:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:07.602 10:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.602 10:28:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:07.860 10:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:07.860 10:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:07.860 10:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:08.118 10:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:08.377 10:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:09.316 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:09.316 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:09.316 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.316 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:09.576 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:09.576 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:09.576 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.576 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.145 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.145 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.145 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.145 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:10.145 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.145 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:10.145 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.145 10:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:10.402 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.402 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:10.402 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.402 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:10.660 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.660 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:10.660 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.660 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:10.918 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.918 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:10.918 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:11.176 10:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:11.435 10:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:12.374 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:12.374 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:12.374 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.374 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:12.632 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:12.632 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:12.632 10:29:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:12.632 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:13.197 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:13.197 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:13.197 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.197 10:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:13.455 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:13.455 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:13.455 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.455 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:13.714 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:13.714 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:22:13.714 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.714 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:13.972 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:13.972 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:13.972 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.972 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:14.230 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:14.230 10:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
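Script line 116 then flips the policy with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active. Up to this point the checks show active_passive behavior, with exactly one path reporting current == true; in the optimized/optimized and non_optimized/non_optimized cycles that follow, both ports report current == true at once. A small sketch of how one might assert that directly, reusing the log's own RPCs (the count check is ours, not the script's):

    # Switch to active_active and confirm both paths are now "current".
    rpc=./scripts/rpc.py sock=/var/tmp/bdevperf.sock
    $rpc -s $sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    n=$($rpc -s $sock bdev_nvme_get_io_paths |
        jq '[.poll_groups[].io_paths[] | select(.current == true)] | length')
    (( n == 2 ))  # with active_active, 4420 and 4421 can both stay current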
00:22:14.488 10:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:22:14.488 10:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:22:14.745 10:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:22:15.311 10:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:22:16.245 10:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:22:16.245 10:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:16.245 10:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:16.245 10:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:16.502 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:16.502 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:16.502 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:16.502 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:16.760 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:16.760 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:16.760 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:16.760 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:17.017 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:17.017 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:17.017 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:17.017 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:17.274 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:17.274 10:29:06
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:17.274 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.275 10:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:17.532 10:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.532 10:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:17.532 10:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.532 10:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:17.790 10:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.790 10:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:17.790 10:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:18.047 10:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:18.307 10:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:19.280 10:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:19.280 10:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:19.280 10:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.280 10:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:19.537 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.537 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:19.537 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.537 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:19.795 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.795 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:19.795 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.795 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:20.053 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.053 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:20.053 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.053 10:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:20.311 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.311 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:20.311 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.311 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:20.878 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.878 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:20.878 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.878 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:21.136 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.136 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:21.136 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:21.394 10:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:21.651 10:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
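Each check_status in this log expands into the same six probes (multipath_status.sh lines 68 through 73): current, connected and accessible, first for port 4420 and then for 4421, with every probe pairing a bdev_nvme_get_io_paths RPC against a jq filter on the path's trsvcid, followed by a string comparison such as [[ true == \t\r\u\e ]]. A hedged reconstruction of that helper pair, pieced together from the xtrace above rather than from the script source:

    rpc=./scripts/rpc.py sock=/var/tmp/bdevperf.sock

    # Compare one field of one path against its expected value.
    port_status() {  # usage: port_status <port> <field> <expected>
        local port=$1 field=$2 expected=$3
        [[ $($rpc -s $sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] |
                    select(.transport.trsvcid == \"$port\").$field") == "$expected" ]]
    }

    # Assert the full 2x3 status matrix, in the order the log checks it.
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }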
00:22:22.589 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:22.589 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:22.589 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.589 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:22.847 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.847 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:22.847 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.847 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:23.416 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.416 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:23.416 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.416 10:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:23.416 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.416 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:23.416 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.416 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:23.983 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.983 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:23.984 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.984 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.242 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.242 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:24.242 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.242 10:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:24.500 10:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.500 10:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:24.500 10:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:24.759 10:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:25.018 10:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:25.954 10:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:25.954 10:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:25.954 10:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.954 10:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:26.520 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.520 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:26.520 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.520 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:26.520 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.520 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:26.520 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.520 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:26.778 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:22:26.778 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:26.778 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:26.778 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:27.036 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:27.036 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:27.036 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:27.036 10:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:27.294 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:27.294 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:22:27.294 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:27.294 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1559110
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1559110 ']'
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1559110
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559110
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1559110'
killing process with pid 1559110
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1559110
00:22:27.554 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1559110
00:22:27.816 Connection closed with partial response:
00:22:27.816
00:22:27.816
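The killprocess teardown above comes from autotest_common.sh and guards the kill several ways before reaping bdevperf: it rejects an empty pid, confirms the process is alive, and refuses to kill a sudo process. Roughly, as read from the xtrace (anything beyond what the trace shows is a guess):

    # Reconstructed from the xtrace: validate the pid, check liveness,
    # never kill a sudo process, then kill and wait for the bdevperf pid.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1           # @950: pid must be non-empty
        kill -0 "$pid" || return 1          # @954: process must exist
        if [ "$(uname)" = Linux ]; then     # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956
        fi
        [ "$process_name" = sudo ] && return 1   # @960: don't kill sudo itself
        echo "killing process with pid $pid"     # @968
        kill "$pid"                              # @969
        wait "$pid"                              # @974
    }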
00:22:27.816 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1559110
00:22:27.816 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:27.816 [2024-07-25 10:28:40.472863] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:22:27.816 [2024-07-25 10:28:40.472957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559110 ]
00:22:27.816 EAL: No free 2048 kB hugepages reported on node 1
00:22:27.816 [2024-07-25 10:28:40.527642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:27.816 [2024-07-25 10:28:40.644227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:27.816 Running I/O for 90 seconds...
00:22:27.816 [2024-07-25 10:28:57.704471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.816 [2024-07-25 10:28:57.704545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:22:27.816 [2024-07-25 10:28:57.704613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.816 [2024-07-25 10:28:57.704636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:22:27.816 [2024-07-25 10:28:57.704662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.817 [2024-07-25 10:28:57.704681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:22:27.817 [2024-07-25 10:28:57.704707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.817 [2024-07-25 10:28:57.704724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:27.817 [2024-07-25 10:28:57.704748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.817 [2024-07-25 10:28:57.704766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:27.817 [2024-07-25 10:28:57.704791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.817 [2024-07-25 10:28:57.704808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:22:27.817 [2024-07-25 10:28:57.704833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.817 [2024-07-25 10:28:57.704850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
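From script line 141 onward the log replays bdevperf's own output (try.txt): the SPDK/DPDK startup banner, then per-command traces in which completions carry ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. I/O that raced with the ANA state flips exercised above; this is expected here, not a failure. When triaging such a run, a quick filter over the replayed file summarizes those events (path as printed in the log; the bucketing choice is ours):

    # Count ANA-inaccessible completions and bucket them by submission queue.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt |
        sort | uniq -c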
00:22:27.817 [2024-07-25 10:28:57.704874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.817 [2024-07-25 10:28:57.704891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
[... several hundred similar nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: every outstanding READ and WRITE on qid:1 (bursts at 10:28:57 and 10:29:14) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:22:27.822 [2024-07-25 10:29:14.703034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.822 [2024-07-25 10:29:14.703051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:22:27.822 [2024-07-25 10:29:14.703075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.822 [2024-07-25 10:29:14.703092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:22:27.822 [2024-07-25 10:29:14.703116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.822 [2024-07-25 10:29:14.703132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:22:27.822 [2024-07-25 10:29:14.703156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.822 [2024-07-25 10:29:14.703173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:22:27.822 [2024-07-25 10:29:14.703197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.822 [2024-07-25 10:29:14.703213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:22:27.822 [2024-07-25 10:29:14.703237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:27.822 [2024-07-25 10:29:14.703254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:22:27.822 Received shutdown signal, test time was about 35.037687 seconds
00:22:27.822
00:22:27.822                                                                  Latency(us)
00:22:27.822 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:27.822 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:27.822 	 Verification LBA range: start 0x0 length 0x4000
00:22:27.822 	 Nvme0n1             :      35.04    7084.79      27.67       0.00       0.00   18034.48     206.32 4026531.84
00:22:27.822 ===================================================================================================================
00:22:27.822 Total                               :    7084.79      27.67       0.00       0.00   18034.48     206.32 4026531.84
00:22:27.822 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:28.085 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:28.085 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:28.085 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:28.085 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
-- # nvmfcleanup 00:22:28.085 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:28.085 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.085 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:28.085 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.085 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.085 rmmod nvme_tcp 00:22:28.085 rmmod nvme_fabrics 00:22:28.347 rmmod nvme_keyring 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1558890 ']' 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1558890 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1558890 ']' 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1558890 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1558890 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1558890' 00:22:28.347 killing process with pid 1558890 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1558890 00:22:28.347 10:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1558890 00:22:28.608 10:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.608 10:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.608 10:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.608 10:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.608 10:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.608 10:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.608 10:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.608 10:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.517 10:29:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.517 00:22:30.517 real 0m43.694s 00:22:30.517 user 2m15.458s 00:22:30.517 sys 0m10.105s 00:22:30.517 10:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.517 10:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:30.517 ************************************ 00:22:30.517 END TEST nvmf_host_multipath_status 00:22:30.517 ************************************ 00:22:30.517 10:29:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:30.517 10:29:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:30.517 10:29:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.517 10:29:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.517 ************************************ 00:22:30.517 START TEST nvmf_discovery_remove_ifc 00:22:30.517 ************************************ 00:22:30.517 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:30.777 * Looking for test storage... 00:22:30.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.777 10:29:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.777 10:29:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:32.159 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:32.159 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.159 10:29:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:32.159 Found net devices under 0000:08:00.0: cvl_0_0 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.159 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:32.159 Found net devices under 0000:08:00.1: cvl_0_1 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:32.160 
10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.160 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:32.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:22:32.419 00:22:32.419 --- 10.0.0.2 ping statistics --- 00:22:32.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.419 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:22:32.419 00:22:32.419 --- 10.0.0.1 ping statistics --- 00:22:32.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.419 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.419 10:29:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1564181 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1564181 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1564181 ']' 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.419 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.419 [2024-07-25 10:29:22.072633] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
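The nvmf_tcp_init sequence traced above gives the target port its own network namespace and leaves the initiator port in the root namespace. Distilled to the bare commands (all verbatim from the nvmf/common.sh trace; the cvl_0_0/cvl_0_1 names are specific to this E810 machine):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port, gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target, checked above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator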
00:22:32.419 [2024-07-25 10:29:22.072734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.419 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.419 [2024-07-25 10:29:22.137554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.679 [2024-07-25 10:29:22.254175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.679 [2024-07-25 10:29:22.254237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.679 [2024-07-25 10:29:22.254253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.679 [2024-07-25 10:29:22.254267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.679 [2024-07-25 10:29:22.254278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.679 [2024-07-25 10:29:22.254317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.679 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.679 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:32.679 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:32.679 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.679 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.679 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.679 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:32.679 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.679 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.679 [2024-07-25 10:29:22.403122] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.679 [2024-07-25 10:29:22.411285] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:32.679 null0 00:22:32.679 [2024-07-25 10:29:22.443234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.938 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.938 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1564201 00:22:32.938 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1564201 /tmp/host.sock 00:22:32.938 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1564201 ']' 00:22:32.938 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:22:32.938 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.938 10:29:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:32.939 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:32.939 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:32.939 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.939 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.939 [2024-07-25 10:29:22.515629] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:22:32.939 [2024-07-25 10:29:22.515727] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1564201 ] 00:22:32.939 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.939 [2024-07-25 10:29:22.576441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.939 [2024-07-25 10:29:22.693210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.197 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.198 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:33.198 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.198 10:29:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:34.136 [2024-07-25 10:29:23.880724] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery 
ctrlr attached 00:22:34.136 [2024-07-25 10:29:23.880756] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:34.136 [2024-07-25 10:29:23.880780] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:34.394 [2024-07-25 10:29:24.007210] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:34.654 [2024-07-25 10:29:24.193046] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:34.654 [2024-07-25 10:29:24.193114] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:34.654 [2024-07-25 10:29:24.193157] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:34.654 [2024-07-25 10:29:24.193182] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:34.654 [2024-07-25 10:29:24.193212] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.654 [2024-07-25 10:29:24.199553] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc9eda0 was disconnected and freed. delete nvme_qpair. 
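The wait_for_bdev/get_bdev_list pair the test keeps looping on below expands, per the xtrace lines, to a one-second poll of the host app's bdev list over /tmp/host.sock. A minimal reconstruction from the trace (the real helpers live in discovery_remove_ifc.sh and may carry a timeout guard; rpc_cmd is the suite's rpc.py wrapper):

    get_bdev_list() {
        # all bdev names known to the host app, sorted and space-joined
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll until the list equals the expected value ('' means: no bdevs left)
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }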
00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:34.654 10:29:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:35.595 10:29:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:36.970 10:29:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.970 10:29:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.970 10:29:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.970 10:29:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.970 10:29:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.970 10:29:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.970 10:29:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.970 10:29:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.970 10:29:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:36.970 10:29:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:37.909 10:29:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:38.848 10:29:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:39.782 10:29:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:40.041 [2024-07-25 10:29:29.634092] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:40.041 [2024-07-25 10:29:29.634159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.041 [2024-07-25 10:29:29.634182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.041 [2024-07-25 10:29:29.634200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.041 [2024-07-25 10:29:29.634214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.041 [2024-07-25 10:29:29.634230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.041 [2024-07-25 10:29:29.634244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.041 [2024-07-25 10:29:29.634259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.041 [2024-07-25 10:29:29.634274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.041 [2024-07-25 10:29:29.634290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.041 [2024-07-25 10:29:29.634305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.041 [2024-07-25 10:29:29.634320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc65620 is same with the state(5) to be set 00:22:40.041 [2024-07-25 10:29:29.644111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc65620 (9): Bad file descriptor 00:22:40.041 [2024-07-25 10:29:29.654158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:40.976 10:29:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.976 10:29:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.976 10:29:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.976 10:29:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.976 10:29:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.976 10:29:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.976 10:29:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.976 [2024-07-25 10:29:30.707535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:40.976 [2024-07-25 10:29:30.707593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc65620 with addr=10.0.0.2, port=4420 00:22:40.976 [2024-07-25 10:29:30.707623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc65620 is same with the state(5) to be set 00:22:40.976 [2024-07-25 10:29:30.707665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc65620 (9): Bad file descriptor 00:22:40.976 [2024-07-25 10:29:30.708070] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:40.976 [2024-07-25 10:29:30.708109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:40.976 [2024-07-25 10:29:30.708123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:40.976 [2024-07-25 10:29:30.708136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:40.976 [2024-07-25 10:29:30.708164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:40.976 [2024-07-25 10:29:30.708179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:40.976 10:29:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.976 10:29:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:40.976 10:29:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.353 [2024-07-25 10:29:31.710679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.353 [2024-07-25 10:29:31.710710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.353 [2024-07-25 10:29:31.710726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.353 [2024-07-25 10:29:31.710739] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:42.353 [2024-07-25 10:29:31.710760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
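The one-second retry cadence and the quick give-up recorded here follow from the reconnect knobs passed to bdev_nvme_start_discovery at discovery_remove_ifc.sh@69 earlier in the trace, reformatted for readability (flag semantics as understood from SPDK's bdev_nvme options):

    # --reconnect-delay-sec 1      : wait one second between reconnect attempts
    # --ctrlr-loss-timeout-sec 2   : give up and delete the ctrlr after ~2s unreachable
    # --fast-io-fail-timeout-sec 1 : fail queued I/O quickly while reconnecting
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach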
00:22:42.353 [2024-07-25 10:29:31.710797] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:42.353 [2024-07-25 10:29:31.710834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.353 [2024-07-25 10:29:31.710856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.353 [2024-07-25 10:29:31.710876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.353 [2024-07-25 10:29:31.710891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.353 [2024-07-25 10:29:31.710905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.353 [2024-07-25 10:29:31.710919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.353 [2024-07-25 10:29:31.710934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.353 [2024-07-25 10:29:31.710949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.353 [2024-07-25 10:29:31.710964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.353 [2024-07-25 10:29:31.710978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.353 [2024-07-25 10:29:31.710992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
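Stripped of the surrounding trace, the interface bounce these notices document is just the following; the delete half has already run at this point, and the re-add with the nvme1n1 wait follows just below (commands verbatim from discovery_remove_ifc.sh@75-@86):

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''           # nvme0n1 must vanish once reconnects are exhausted
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1      # discovery re-attaches the subsystem as nvme1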
00:22:42.353 [2024-07-25 10:29:31.711139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc64a80 (9): Bad file descriptor 00:22:42.353 [2024-07-25 10:29:31.712159] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:42.353 [2024-07-25 10:29:31.712182] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:42.353 10:29:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:43.292 10:29:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.292 10:29:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.292 10:29:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.292 10:29:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.292 10:29:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.292 10:29:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.292 10:29:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.292 10:29:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.292 10:29:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:43.292 10:29:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.253 [2024-07-25 10:29:33.767326] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:44.253 [2024-07-25 10:29:33.767373] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:44.253 [2024-07-25 10:29:33.767399] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.253 [2024-07-25 10:29:33.894798] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:44.253 10:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.253 [2024-07-25 10:29:33.957523] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:44.253 [2024-07-25 10:29:33.957593] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:44.253 [2024-07-25 10:29:33.957636] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:44.253 [2024-07-25 10:29:33.957661] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:44.253 [2024-07-25 10:29:33.957675] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:44.253 [2024-07-25 10:29:33.965176] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc6c250 was disconnected and freed. 
delete nvme_qpair. 00:22:45.197 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.197 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.197 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.197 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.197 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.197 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.197 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.197 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.456 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:45.456 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:45.456 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1564201 00:22:45.456 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1564201 ']' 00:22:45.456 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1564201 00:22:45.456 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:45.456 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.456 10:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1564201 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1564201' 00:22:45.456 killing process with pid 1564201 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1564201 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1564201 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.456 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.456 rmmod nvme_tcp 00:22:45.714 rmmod nvme_fabrics 00:22:45.714 rmmod nvme_keyring 
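The killprocess call above for the host app (and the one for the target app just below) runs the same guard sequence before signalling. A minimal reconstruction from the xtrace; the real helper in common/autotest_common.sh carries extra cleanup, and the exact return semantics here are an assumption:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # no PID recorded
        kill -0 "$pid" || return 0                  # process already exited
        if [ "$(uname)" = Linux ]; then
            # The trace compares the process name (reactor_0/reactor_1)
            # against "sudo" so a bare sudo wrapper is never signalled.
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap before teardown
    }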
00:22:45.714 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.714 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:45.714 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:45.714 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1564181 ']' 00:22:45.714 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1564181 00:22:45.714 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1564181 ']' 00:22:45.715 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1564181 00:22:45.715 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:45.715 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.715 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1564181 00:22:45.715 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:45.715 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:45.715 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1564181' 00:22:45.715 killing process with pid 1564181 00:22:45.715 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1564181 00:22:45.715 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1564181 00:22:45.975 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:45.975 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:45.975 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:45.975 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.975 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.975 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.975 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.975 10:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.880 10:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:47.880 00:22:47.880 real 0m17.325s 00:22:47.880 user 0m25.719s 00:22:47.880 sys 0m2.664s 00:22:47.880 10:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:47.880 10:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.880 ************************************ 00:22:47.880 END TEST nvmf_discovery_remove_ifc 00:22:47.880 ************************************ 00:22:47.880 10:29:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:47.880 10:29:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:47.880 10:29:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:47.880 10:29:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.880 ************************************ 00:22:47.880 START TEST nvmf_identify_kernel_target 00:22:47.880 ************************************ 00:22:47.880 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:48.141 * Looking for test storage... 00:22:48.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.141 10:29:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.141 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:48.142 10:29:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:50.047 
10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:50.047 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:50.047 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.047 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:50.048 Found net devices under 0000:08:00.0: cvl_0_0 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:50.048 Found net devices under 0000:08:00.1: cvl_0_1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:50.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:22:50.048 00:22:50.048 --- 10.0.0.2 ping statistics --- 00:22:50.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.048 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:22:50.048 00:22:50.048 --- 10.0.0.1 ping statistics --- 00:22:50.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.048 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:50.048 10:29:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:50.984 Waiting for block devices as requested 00:22:50.984 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:22:50.984 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:50.984 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:51.243 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:51.243 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:51.243 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:51.243 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:51.243 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:51.502 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:51.502 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:51.502 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:51.761 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:51.761 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:51.761 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:52.020 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:52.020 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:52.020 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:52.020 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:52.020 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:52.020 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:52.020 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:52.020 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:52.020 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:52.020 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:52.020 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
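configure_kernel_target, whose trace continues below, builds the kernel nvmet target entirely through configfs writes. Condensed into plain shell; the directory layout matches the trace, but which nvmet attribute each bare echo writes to is inferred from the standard /sys/kernel/config/nvmet layout (the model string does reappear in the identify output further down), so treat the attribute names as an assumption:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    # configfs auto-populates namespaces/ and ports/ entries on mkdir
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"          # accept any host NQN
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"          # bring the namespace online

    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"

    # Exposing the subsystem on the port is just a symlink, as in the trace:
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"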
00:22:52.020 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:52.277 No valid GPT data, bailing 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:52.277 10:29:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:22:52.277 00:22:52.277 Discovery Log Number of Records 2, Generation counter 2 00:22:52.277 =====Discovery Log Entry 0====== 00:22:52.277 trtype: tcp 00:22:52.277 adrfam: ipv4 00:22:52.277 subtype: current discovery subsystem 00:22:52.277 treq: not specified, sq flow control disable supported 00:22:52.277 portid: 1 00:22:52.277 trsvcid: 4420 00:22:52.277 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:52.277 traddr: 10.0.0.1 00:22:52.277 eflags: none 00:22:52.277 sectype: none 00:22:52.277 =====Discovery Log Entry 1====== 00:22:52.277 trtype: tcp 00:22:52.277 adrfam: ipv4 00:22:52.277 subtype: nvme subsystem 00:22:52.277 treq: not specified, sq flow control disable supported 00:22:52.277 portid: 1 00:22:52.277 trsvcid: 4420 00:22:52.277 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:52.277 traddr: 10.0.0.1 00:22:52.277 eflags: none 00:22:52.277 sectype: none 00:22:52.277 10:29:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:52.277 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:52.277 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.277 ===================================================== 00:22:52.277 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:52.277 ===================================================== 00:22:52.277 Controller Capabilities/Features 00:22:52.277 ================================ 00:22:52.277 Vendor ID: 0000 00:22:52.277 Subsystem Vendor ID: 0000 00:22:52.277 Serial Number: 90357b6021c1c52fb93a 00:22:52.277 Model Number: Linux 00:22:52.277 Firmware Version: 6.7.0-68 00:22:52.277 Recommended Arb Burst: 0 00:22:52.277 IEEE OUI Identifier: 00 00 00 00:22:52.277 Multi-path I/O 00:22:52.277 May have multiple subsystem ports: No 00:22:52.277 May have multiple controllers: No 00:22:52.277 Associated with SR-IOV VF: No 00:22:52.277 Max Data Transfer Size: Unlimited 00:22:52.277 Max Number of Namespaces: 0 00:22:52.277 Max Number of I/O Queues: 1024 00:22:52.277 NVMe Specification Version (VS): 1.3 00:22:52.277 NVMe Specification Version (Identify): 1.3 00:22:52.277 Maximum Queue Entries: 1024 00:22:52.277 Contiguous Queues Required: No 00:22:52.277 Arbitration Mechanisms Supported 00:22:52.277 Weighted Round Robin: Not Supported 00:22:52.277 Vendor Specific: Not Supported 00:22:52.277 Reset Timeout: 7500 ms 00:22:52.277 Doorbell Stride: 4 bytes 00:22:52.277 NVM Subsystem Reset: Not Supported 00:22:52.277 Command Sets Supported 00:22:52.277 NVM Command Set: Supported 00:22:52.277 Boot Partition: Not Supported 00:22:52.277 Memory Page Size Minimum: 4096 bytes 00:22:52.277 Memory Page Size Maximum: 4096 bytes 00:22:52.277 Persistent Memory Region: Not Supported 00:22:52.277 Optional Asynchronous Events Supported 00:22:52.277 Namespace Attribute Notices: Not Supported 00:22:52.277 Firmware Activation Notices: Not Supported 00:22:52.277 ANA Change Notices: Not Supported 00:22:52.277 PLE Aggregate Log Change Notices: Not Supported 00:22:52.277 LBA Status Info Alert Notices: Not Supported 00:22:52.277 EGE Aggregate Log Change Notices: Not Supported 00:22:52.277 Normal NVM Subsystem Shutdown event: Not Supported 00:22:52.277 Zone Descriptor Change Notices: Not Supported 00:22:52.277 Discovery Log Change Notices: Supported 00:22:52.277 Controller Attributes 00:22:52.277 128-bit Host Identifier: Not Supported 00:22:52.277 Non-Operational Permissive Mode: Not Supported 00:22:52.277 NVM Sets: Not Supported 00:22:52.277 Read Recovery Levels: Not Supported 00:22:52.277 Endurance Groups: Not Supported 00:22:52.277 Predictable Latency Mode: Not Supported 00:22:52.277 Traffic Based Keep ALive: Not Supported 00:22:52.277 Namespace Granularity: Not Supported 00:22:52.277 SQ Associations: Not Supported 00:22:52.277 UUID List: Not Supported 00:22:52.277 Multi-Domain Subsystem: Not Supported 00:22:52.277 Fixed Capacity Management: Not Supported 00:22:52.277 Variable Capacity Management: Not Supported 00:22:52.277 Delete Endurance Group: Not Supported 00:22:52.277 Delete NVM Set: Not Supported 00:22:52.277 Extended LBA Formats Supported: Not Supported 00:22:52.277 Flexible Data Placement Supported: Not Supported 00:22:52.277 00:22:52.277 Controller Memory Buffer Support 00:22:52.277 ================================ 00:22:52.277 Supported: No 
00:22:52.277 00:22:52.277 Persistent Memory Region Support 00:22:52.277 ================================ 00:22:52.277 Supported: No 00:22:52.277 00:22:52.277 Admin Command Set Attributes 00:22:52.277 ============================ 00:22:52.277 Security Send/Receive: Not Supported 00:22:52.277 Format NVM: Not Supported 00:22:52.277 Firmware Activate/Download: Not Supported 00:22:52.277 Namespace Management: Not Supported 00:22:52.277 Device Self-Test: Not Supported 00:22:52.277 Directives: Not Supported 00:22:52.277 NVMe-MI: Not Supported 00:22:52.277 Virtualization Management: Not Supported 00:22:52.277 Doorbell Buffer Config: Not Supported 00:22:52.277 Get LBA Status Capability: Not Supported 00:22:52.277 Command & Feature Lockdown Capability: Not Supported 00:22:52.277 Abort Command Limit: 1 00:22:52.277 Async Event Request Limit: 1 00:22:52.277 Number of Firmware Slots: N/A 00:22:52.277 Firmware Slot 1 Read-Only: N/A 00:22:52.277 Firmware Activation Without Reset: N/A 00:22:52.277 Multiple Update Detection Support: N/A 00:22:52.277 Firmware Update Granularity: No Information Provided 00:22:52.277 Per-Namespace SMART Log: No 00:22:52.277 Asymmetric Namespace Access Log Page: Not Supported 00:22:52.277 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:52.277 Command Effects Log Page: Not Supported 00:22:52.277 Get Log Page Extended Data: Supported 00:22:52.277 Telemetry Log Pages: Not Supported 00:22:52.277 Persistent Event Log Pages: Not Supported 00:22:52.277 Supported Log Pages Log Page: May Support 00:22:52.277 Commands Supported & Effects Log Page: Not Supported 00:22:52.277 Feature Identifiers & Effects Log Page:May Support 00:22:52.277 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.277 Data Area 4 for Telemetry Log: Not Supported 00:22:52.277 Error Log Page Entries Supported: 1 00:22:52.277 Keep Alive: Not Supported 00:22:52.277 00:22:52.277 NVM Command Set Attributes 00:22:52.277 ========================== 00:22:52.277 Submission Queue Entry Size 00:22:52.277 Max: 1 00:22:52.278 Min: 1 00:22:52.278 Completion Queue Entry Size 00:22:52.278 Max: 1 00:22:52.278 Min: 1 00:22:52.278 Number of Namespaces: 0 00:22:52.278 Compare Command: Not Supported 00:22:52.278 Write Uncorrectable Command: Not Supported 00:22:52.278 Dataset Management Command: Not Supported 00:22:52.278 Write Zeroes Command: Not Supported 00:22:52.278 Set Features Save Field: Not Supported 00:22:52.278 Reservations: Not Supported 00:22:52.278 Timestamp: Not Supported 00:22:52.278 Copy: Not Supported 00:22:52.278 Volatile Write Cache: Not Present 00:22:52.278 Atomic Write Unit (Normal): 1 00:22:52.278 Atomic Write Unit (PFail): 1 00:22:52.278 Atomic Compare & Write Unit: 1 00:22:52.278 Fused Compare & Write: Not Supported 00:22:52.278 Scatter-Gather List 00:22:52.278 SGL Command Set: Supported 00:22:52.278 SGL Keyed: Not Supported 00:22:52.278 SGL Bit Bucket Descriptor: Not Supported 00:22:52.278 SGL Metadata Pointer: Not Supported 00:22:52.278 Oversized SGL: Not Supported 00:22:52.278 SGL Metadata Address: Not Supported 00:22:52.278 SGL Offset: Supported 00:22:52.278 Transport SGL Data Block: Not Supported 00:22:52.278 Replay Protected Memory Block: Not Supported 00:22:52.278 00:22:52.278 Firmware Slot Information 00:22:52.278 ========================= 00:22:52.278 Active slot: 0 00:22:52.278 00:22:52.278 00:22:52.278 Error Log 00:22:52.278 ========= 00:22:52.278 00:22:52.278 Active Namespaces 00:22:52.278 ================= 00:22:52.278 Discovery Log Page 00:22:52.278 ================== 00:22:52.278 
Generation Counter: 2 00:22:52.278 Number of Records: 2 00:22:52.278 Record Format: 0 00:22:52.278 00:22:52.278 Discovery Log Entry 0 00:22:52.278 ---------------------- 00:22:52.278 Transport Type: 3 (TCP) 00:22:52.278 Address Family: 1 (IPv4) 00:22:52.278 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:52.278 Entry Flags: 00:22:52.278 Duplicate Returned Information: 0 00:22:52.278 Explicit Persistent Connection Support for Discovery: 0 00:22:52.278 Transport Requirements: 00:22:52.278 Secure Channel: Not Specified 00:22:52.278 Port ID: 1 (0x0001) 00:22:52.278 Controller ID: 65535 (0xffff) 00:22:52.278 Admin Max SQ Size: 32 00:22:52.278 Transport Service Identifier: 4420 00:22:52.278 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:52.278 Transport Address: 10.0.0.1 00:22:52.278 Discovery Log Entry 1 00:22:52.278 ---------------------- 00:22:52.278 Transport Type: 3 (TCP) 00:22:52.278 Address Family: 1 (IPv4) 00:22:52.278 Subsystem Type: 2 (NVM Subsystem) 00:22:52.278 Entry Flags: 00:22:52.278 Duplicate Returned Information: 0 00:22:52.278 Explicit Persistent Connection Support for Discovery: 0 00:22:52.278 Transport Requirements: 00:22:52.278 Secure Channel: Not Specified 00:22:52.278 Port ID: 1 (0x0001) 00:22:52.278 Controller ID: 65535 (0xffff) 00:22:52.278 Admin Max SQ Size: 32 00:22:52.278 Transport Service Identifier: 4420 00:22:52.278 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:52.278 Transport Address: 10.0.0.1 00:22:52.278 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:52.536 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.536 get_feature(0x01) failed 00:22:52.536 get_feature(0x02) failed 00:22:52.536 get_feature(0x04) failed 00:22:52.536 ===================================================== 00:22:52.536 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:52.536 ===================================================== 00:22:52.536 Controller Capabilities/Features 00:22:52.536 ================================ 00:22:52.536 Vendor ID: 0000 00:22:52.536 Subsystem Vendor ID: 0000 00:22:52.536 Serial Number: 603c055e039293299305 00:22:52.536 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:52.536 Firmware Version: 6.7.0-68 00:22:52.536 Recommended Arb Burst: 6 00:22:52.536 IEEE OUI Identifier: 00 00 00 00:22:52.536 Multi-path I/O 00:22:52.536 May have multiple subsystem ports: Yes 00:22:52.536 May have multiple controllers: Yes 00:22:52.536 Associated with SR-IOV VF: No 00:22:52.536 Max Data Transfer Size: Unlimited 00:22:52.536 Max Number of Namespaces: 1024 00:22:52.536 Max Number of I/O Queues: 128 00:22:52.536 NVMe Specification Version (VS): 1.3 00:22:52.536 NVMe Specification Version (Identify): 1.3 00:22:52.536 Maximum Queue Entries: 1024 00:22:52.536 Contiguous Queues Required: No 00:22:52.536 Arbitration Mechanisms Supported 00:22:52.536 Weighted Round Robin: Not Supported 00:22:52.536 Vendor Specific: Not Supported 00:22:52.536 Reset Timeout: 7500 ms 00:22:52.536 Doorbell Stride: 4 bytes 00:22:52.536 NVM Subsystem Reset: Not Supported 00:22:52.536 Command Sets Supported 00:22:52.536 NVM Command Set: Supported 00:22:52.536 Boot Partition: Not Supported 00:22:52.536 Memory Page Size Minimum: 4096 bytes 00:22:52.536 Memory Page Size Maximum: 4096 bytes 00:22:52.536 
Persistent Memory Region: Not Supported 00:22:52.536 Optional Asynchronous Events Supported 00:22:52.536 Namespace Attribute Notices: Supported 00:22:52.536 Firmware Activation Notices: Not Supported 00:22:52.536 ANA Change Notices: Supported 00:22:52.536 PLE Aggregate Log Change Notices: Not Supported 00:22:52.536 LBA Status Info Alert Notices: Not Supported 00:22:52.536 EGE Aggregate Log Change Notices: Not Supported 00:22:52.536 Normal NVM Subsystem Shutdown event: Not Supported 00:22:52.536 Zone Descriptor Change Notices: Not Supported 00:22:52.536 Discovery Log Change Notices: Not Supported 00:22:52.536 Controller Attributes 00:22:52.536 128-bit Host Identifier: Supported 00:22:52.536 Non-Operational Permissive Mode: Not Supported 00:22:52.536 NVM Sets: Not Supported 00:22:52.536 Read Recovery Levels: Not Supported 00:22:52.536 Endurance Groups: Not Supported 00:22:52.536 Predictable Latency Mode: Not Supported 00:22:52.536 Traffic Based Keep ALive: Supported 00:22:52.536 Namespace Granularity: Not Supported 00:22:52.536 SQ Associations: Not Supported 00:22:52.536 UUID List: Not Supported 00:22:52.536 Multi-Domain Subsystem: Not Supported 00:22:52.536 Fixed Capacity Management: Not Supported 00:22:52.536 Variable Capacity Management: Not Supported 00:22:52.536 Delete Endurance Group: Not Supported 00:22:52.536 Delete NVM Set: Not Supported 00:22:52.536 Extended LBA Formats Supported: Not Supported 00:22:52.536 Flexible Data Placement Supported: Not Supported 00:22:52.536 00:22:52.536 Controller Memory Buffer Support 00:22:52.536 ================================ 00:22:52.536 Supported: No 00:22:52.536 00:22:52.536 Persistent Memory Region Support 00:22:52.536 ================================ 00:22:52.536 Supported: No 00:22:52.537 00:22:52.537 Admin Command Set Attributes 00:22:52.537 ============================ 00:22:52.537 Security Send/Receive: Not Supported 00:22:52.537 Format NVM: Not Supported 00:22:52.537 Firmware Activate/Download: Not Supported 00:22:52.537 Namespace Management: Not Supported 00:22:52.537 Device Self-Test: Not Supported 00:22:52.537 Directives: Not Supported 00:22:52.537 NVMe-MI: Not Supported 00:22:52.537 Virtualization Management: Not Supported 00:22:52.537 Doorbell Buffer Config: Not Supported 00:22:52.537 Get LBA Status Capability: Not Supported 00:22:52.537 Command & Feature Lockdown Capability: Not Supported 00:22:52.537 Abort Command Limit: 4 00:22:52.537 Async Event Request Limit: 4 00:22:52.537 Number of Firmware Slots: N/A 00:22:52.537 Firmware Slot 1 Read-Only: N/A 00:22:52.537 Firmware Activation Without Reset: N/A 00:22:52.537 Multiple Update Detection Support: N/A 00:22:52.537 Firmware Update Granularity: No Information Provided 00:22:52.537 Per-Namespace SMART Log: Yes 00:22:52.537 Asymmetric Namespace Access Log Page: Supported 00:22:52.537 ANA Transition Time : 10 sec 00:22:52.537 00:22:52.537 Asymmetric Namespace Access Capabilities 00:22:52.537 ANA Optimized State : Supported 00:22:52.537 ANA Non-Optimized State : Supported 00:22:52.537 ANA Inaccessible State : Supported 00:22:52.537 ANA Persistent Loss State : Supported 00:22:52.537 ANA Change State : Supported 00:22:52.537 ANAGRPID is not changed : No 00:22:52.537 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:52.537 00:22:52.537 ANA Group Identifier Maximum : 128 00:22:52.537 Number of ANA Group Identifiers : 128 00:22:52.537 Max Number of Allowed Namespaces : 1024 00:22:52.537 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:52.537 Command Effects Log Page: Supported 
00:22:52.537 Get Log Page Extended Data: Supported 00:22:52.537 Telemetry Log Pages: Not Supported 00:22:52.537 Persistent Event Log Pages: Not Supported 00:22:52.537 Supported Log Pages Log Page: May Support 00:22:52.537 Commands Supported & Effects Log Page: Not Supported 00:22:52.537 Feature Identifiers & Effects Log Page:May Support 00:22:52.537 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.537 Data Area 4 for Telemetry Log: Not Supported 00:22:52.537 Error Log Page Entries Supported: 128 00:22:52.537 Keep Alive: Supported 00:22:52.537 Keep Alive Granularity: 1000 ms 00:22:52.537 00:22:52.537 NVM Command Set Attributes 00:22:52.537 ========================== 00:22:52.537 Submission Queue Entry Size 00:22:52.537 Max: 64 00:22:52.537 Min: 64 00:22:52.537 Completion Queue Entry Size 00:22:52.537 Max: 16 00:22:52.537 Min: 16 00:22:52.537 Number of Namespaces: 1024 00:22:52.537 Compare Command: Not Supported 00:22:52.537 Write Uncorrectable Command: Not Supported 00:22:52.537 Dataset Management Command: Supported 00:22:52.537 Write Zeroes Command: Supported 00:22:52.537 Set Features Save Field: Not Supported 00:22:52.537 Reservations: Not Supported 00:22:52.537 Timestamp: Not Supported 00:22:52.537 Copy: Not Supported 00:22:52.537 Volatile Write Cache: Present 00:22:52.537 Atomic Write Unit (Normal): 1 00:22:52.537 Atomic Write Unit (PFail): 1 00:22:52.537 Atomic Compare & Write Unit: 1 00:22:52.537 Fused Compare & Write: Not Supported 00:22:52.537 Scatter-Gather List 00:22:52.537 SGL Command Set: Supported 00:22:52.537 SGL Keyed: Not Supported 00:22:52.537 SGL Bit Bucket Descriptor: Not Supported 00:22:52.537 SGL Metadata Pointer: Not Supported 00:22:52.537 Oversized SGL: Not Supported 00:22:52.537 SGL Metadata Address: Not Supported 00:22:52.537 SGL Offset: Supported 00:22:52.537 Transport SGL Data Block: Not Supported 00:22:52.537 Replay Protected Memory Block: Not Supported 00:22:52.537 00:22:52.537 Firmware Slot Information 00:22:52.537 ========================= 00:22:52.537 Active slot: 0 00:22:52.537 00:22:52.537 Asymmetric Namespace Access 00:22:52.537 =========================== 00:22:52.537 Change Count : 0 00:22:52.537 Number of ANA Group Descriptors : 1 00:22:52.537 ANA Group Descriptor : 0 00:22:52.537 ANA Group ID : 1 00:22:52.537 Number of NSID Values : 1 00:22:52.537 Change Count : 0 00:22:52.537 ANA State : 1 00:22:52.537 Namespace Identifier : 1 00:22:52.537 00:22:52.537 Commands Supported and Effects 00:22:52.537 ============================== 00:22:52.537 Admin Commands 00:22:52.537 -------------- 00:22:52.537 Get Log Page (02h): Supported 00:22:52.537 Identify (06h): Supported 00:22:52.537 Abort (08h): Supported 00:22:52.537 Set Features (09h): Supported 00:22:52.537 Get Features (0Ah): Supported 00:22:52.537 Asynchronous Event Request (0Ch): Supported 00:22:52.537 Keep Alive (18h): Supported 00:22:52.537 I/O Commands 00:22:52.537 ------------ 00:22:52.537 Flush (00h): Supported 00:22:52.537 Write (01h): Supported LBA-Change 00:22:52.537 Read (02h): Supported 00:22:52.537 Write Zeroes (08h): Supported LBA-Change 00:22:52.537 Dataset Management (09h): Supported 00:22:52.537 00:22:52.537 Error Log 00:22:52.537 ========= 00:22:52.537 Entry: 0 00:22:52.537 Error Count: 0x3 00:22:52.537 Submission Queue Id: 0x0 00:22:52.537 Command Id: 0x5 00:22:52.537 Phase Bit: 0 00:22:52.537 Status Code: 0x2 00:22:52.537 Status Code Type: 0x0 00:22:52.537 Do Not Retry: 1 00:22:52.537 Error Location: 0x28 00:22:52.537 LBA: 0x0 00:22:52.537 Namespace: 0x0 00:22:52.537 Vendor Log 
Page: 0x0 00:22:52.537 ----------- 00:22:52.537 Entry: 1 00:22:52.537 Error Count: 0x2 00:22:52.537 Submission Queue Id: 0x0 00:22:52.537 Command Id: 0x5 00:22:52.537 Phase Bit: 0 00:22:52.537 Status Code: 0x2 00:22:52.537 Status Code Type: 0x0 00:22:52.537 Do Not Retry: 1 00:22:52.537 Error Location: 0x28 00:22:52.537 LBA: 0x0 00:22:52.537 Namespace: 0x0 00:22:52.537 Vendor Log Page: 0x0 00:22:52.537 ----------- 00:22:52.537 Entry: 2 00:22:52.537 Error Count: 0x1 00:22:52.537 Submission Queue Id: 0x0 00:22:52.537 Command Id: 0x4 00:22:52.537 Phase Bit: 0 00:22:52.537 Status Code: 0x2 00:22:52.537 Status Code Type: 0x0 00:22:52.537 Do Not Retry: 1 00:22:52.537 Error Location: 0x28 00:22:52.537 LBA: 0x0 00:22:52.537 Namespace: 0x0 00:22:52.537 Vendor Log Page: 0x0 00:22:52.537 00:22:52.537 Number of Queues 00:22:52.537 ================ 00:22:52.537 Number of I/O Submission Queues: 128 00:22:52.537 Number of I/O Completion Queues: 128 00:22:52.537 00:22:52.537 ZNS Specific Controller Data 00:22:52.537 ============================ 00:22:52.537 Zone Append Size Limit: 0 00:22:52.537 00:22:52.537 00:22:52.537 Active Namespaces 00:22:52.537 ================= 00:22:52.537 get_feature(0x05) failed 00:22:52.537 Namespace ID:1 00:22:52.537 Command Set Identifier: NVM (00h) 00:22:52.537 Deallocate: Supported 00:22:52.538 Deallocated/Unwritten Error: Not Supported 00:22:52.538 Deallocated Read Value: Unknown 00:22:52.538 Deallocate in Write Zeroes: Not Supported 00:22:52.538 Deallocated Guard Field: 0xFFFF 00:22:52.538 Flush: Supported 00:22:52.538 Reservation: Not Supported 00:22:52.538 Namespace Sharing Capabilities: Multiple Controllers 00:22:52.538 Size (in LBAs): 1953525168 (931GiB) 00:22:52.538 Capacity (in LBAs): 1953525168 (931GiB) 00:22:52.538 Utilization (in LBAs): 1953525168 (931GiB) 00:22:52.538 UUID: 0b65ff26-c65a-423d-9276-1ee8caab8d04 00:22:52.538 Thin Provisioning: Not Supported 00:22:52.538 Per-NS Atomic Units: Yes 00:22:52.538 Atomic Boundary Size (Normal): 0 00:22:52.538 Atomic Boundary Size (PFail): 0 00:22:52.538 Atomic Boundary Offset: 0 00:22:52.538 NGUID/EUI64 Never Reused: No 00:22:52.538 ANA group ID: 1 00:22:52.538 Namespace Write Protected: No 00:22:52.538 Number of LBA Formats: 1 00:22:52.538 Current LBA Format: LBA Format #00 00:22:52.538 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:52.538 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.538 rmmod nvme_tcp 00:22:52.538 rmmod nvme_fabrics 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:52.538 10:29:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.538 10:29:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.445 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.445 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:54.445 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:54.445 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:54.704 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:54.704 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:54.704 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:54.704 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:54.704 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:54.704 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:54.704 10:29:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:55.642 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:22:55.642 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:22:55.642 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:22:55.642 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:22:55.642 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:22:55.642 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:22:55.642 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:22:55.642 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:22:55.642 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:22:55.642 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:22:55.642 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:22:55.642 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:22:55.642 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:22:55.642 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:22:55.642 0000:80:04.1 (8086 3c21): ioatdma -> 
vfio-pci 00:22:55.642 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:22:56.578 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:22:56.578 00:22:56.578 real 0m8.691s 00:22:56.578 user 0m1.853s 00:22:56.578 sys 0m2.979s 00:22:56.578 10:29:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:56.578 10:29:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.578 ************************************ 00:22:56.578 END TEST nvmf_identify_kernel_target 00:22:56.578 ************************************ 00:22:56.578 10:29:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:56.578 10:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:56.578 10:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:56.578 10:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.836 ************************************ 00:22:56.836 START TEST nvmf_auth_host 00:22:56.836 ************************************ 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:56.836 * Looking for test storage... 00:22:56.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
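The NVME_HOSTNQN traced above comes from nvme gen-hostnqn, which simply wraps a freshly generated UUID in the standard 2014-08 NVMe host NQN prefix; common.sh then keeps the bare UUID as NVME_HOSTID. A minimal sketch of the equivalent shell, assuming uuidgen is available:

HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"   # same shape as nvme gen-hostnqn output
HOSTID=${HOSTNQN##*uuid:}                              # bare UUID, as passed via --hostid
echo "$HOSTNQN"
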
00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.836 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.837 10:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.739 10:29:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:58.739 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
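The NIC discovery running here is plain sysfs walking: candidate PCI functions are matched by vendor/device ID (0x8086:0x159b is the Intel E810 pair on this rig), and each match is then mapped to its kernel net device by globbing the function's net/ directory. A stripped-down sketch of the same pattern:

for pci in /sys/bus/pci/devices/*; do
    ven=$(cat "$pci/vendor"); dev=$(cat "$pci/device")
    [[ $ven == 0x8086 && $dev == 0x159b ]] || continue   # keep only the E810 functions
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done
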
00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:58.739 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:58.739 Found net devices under 0000:08:00.0: cvl_0_0 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:58.739 Found net devices under 0000:08:00.1: cvl_0_1 00:22:58.739 10:29:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:58.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:22:58.739 00:22:58.739 --- 10.0.0.2 ping statistics --- 00:22:58.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.739 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:22:58.739 00:22:58.739 --- 10.0.0.1 ping statistics --- 00:22:58.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.739 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1569712 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1569712 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1569712 ']' 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
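For reference, the nvmf_tcp_init sequence traced a few entries back reduces to: move the target-side port into a private network namespace, address both ends on 10.0.0.0/24, open TCP/4420 on the initiator side, and prove reachability in both directions. A condensed replay of the logged commands (cvl_0_0/cvl_0_1 are this machine's interface names):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
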
00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.739 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7d70c267a562cc74edcf735c67d146f3 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OnL 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7d70c267a562cc74edcf735c67d146f3 0 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7d70c267a562cc74edcf735c67d146f3 0 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7d70c267a562cc74edcf735c67d146f3 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.998 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OnL 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OnL 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.OnL 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.999 10:29:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e841516460f7e9c7dc21a16add2a6287fcca9f47440dc6d134ac029b6498822c 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RXD 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e841516460f7e9c7dc21a16add2a6287fcca9f47440dc6d134ac029b6498822c 3 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e841516460f7e9c7dc21a16add2a6287fcca9f47440dc6d134ac029b6498822c 3 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e841516460f7e9c7dc21a16add2a6287fcca9f47440dc6d134ac029b6498822c 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RXD 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RXD 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.RXD 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e82ded47c7a9f34e9bf5678a2e79cd8770a32bca1ad2b2f3 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.n8G 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e82ded47c7a9f34e9bf5678a2e79cd8770a32bca1ad2b2f3 0 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e82ded47c7a9f34e9bf5678a2e79cd8770a32bca1ad2b2f3 0 
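The gen_dhchap_key calls being traced here draw N random bytes with xxd and hand them to an inline Python helper that prints the DH-HMAC-CHAP secret representation. A sketch of the shape of that operation; the DHHC-1 encoding (base64 of the key material with a little-endian CRC-32 trailer) follows NVMe TP 8006, and the helper below is illustrative rather than SPDK's exact script:

key_hex=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes -> a 32-hex-char key
secret=$(python3 - "$key_hex" <<'PY'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(raw).to_bytes(4, "little")   # CRC-32 of the key, appended
print("DHHC-1:00:" + base64.b64encode(raw + crc).decode() + ":")  # 00 = null digest
PY
)
keyfile=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$keyfile" && chmod 0600 "$keyfile"
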
00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e82ded47c7a9f34e9bf5678a2e79cd8770a32bca1ad2b2f3 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.n8G 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.n8G 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.n8G 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7788d1b2793957fc66110f2557aa65fa59dadd71c2c0daaf 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Y8y 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7788d1b2793957fc66110f2557aa65fa59dadd71c2c0daaf 2 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7788d1b2793957fc66110f2557aa65fa59dadd71c2c0daaf 2 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7788d1b2793957fc66110f2557aa65fa59dadd71c2c0daaf 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Y8y 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Y8y 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Y8y 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.999 10:29:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5bb0878ce724e46c59b1850c7c0147c0 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ztI 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5bb0878ce724e46c59b1850c7c0147c0 1 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5bb0878ce724e46c59b1850c7c0147c0 1 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5bb0878ce724e46c59b1850c7c0147c0 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:58.999 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ztI 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ztI 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ztI 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=468f21c3dd05dd2ac2e57d697d374420 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Qdz 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 468f21c3dd05dd2ac2e57d697d374420 1 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 468f21c3dd05dd2ac2e57d697d374420 1 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=468f21c3dd05dd2ac2e57d697d374420 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Qdz 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Qdz 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Qdz 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=76d56597a61d77e8743692817da557d223ef181d4b24239b 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iQu 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 76d56597a61d77e8743692817da557d223ef181d4b24239b 2 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 76d56597a61d77e8743692817da557d223ef181d4b24239b 2 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=76d56597a61d77e8743692817da557d223ef181d4b24239b 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iQu 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iQu 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.iQu 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:59.258 10:29:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=021fdb50f2883365f209cf731936f9c3 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.K2T 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 021fdb50f2883365f209cf731936f9c3 0 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 021fdb50f2883365f209cf731936f9c3 0 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=021fdb50f2883365f209cf731936f9c3 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.K2T 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.K2T 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.K2T 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cac7e8956737502809a7a4918faecbe4621916e79c5c90232997e42036a0cf3c 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TGt 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cac7e8956737502809a7a4918faecbe4621916e79c5c90232997e42036a0cf3c 3 00:22:59.258 10:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cac7e8956737502809a7a4918faecbe4621916e79c5c90232997e42036a0cf3c 3 00:22:59.258 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:59.258 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:59.258 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cac7e8956737502809a7a4918faecbe4621916e79c5c90232997e42036a0cf3c 00:22:59.258 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:59.258 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TGt 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TGt 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.TGt 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1569712 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1569712 ']' 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.516 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.OnL 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.RXD ]] 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RXD 00:22:59.776 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.n8G 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Y8y ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Y8y 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ztI 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Qdz ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qdz 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.iQu 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.K2T ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.K2T 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.TGt 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.777 10:29:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:59.777 10:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:00.711 Waiting for block devices as requested 00:23:00.711 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:23:00.711 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:00.969 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:00.969 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:00.969 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:00.969 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:01.228 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:01.228 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:01.228 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:01.228 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:01.488 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:01.488 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:01.488 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:01.747 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:01.747 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:01.747 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:01.747 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:02.311 No valid GPT data, bailing 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:02.311 10:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:02.311 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:02.311 10:29:52 
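Note: with the keys registered, the trace turns to building the authentication target: configure_kernel_target provisions an in-kernel NVMe/TCP subsystem (nqn.2024-02.io.spdk:cnode0) through configfs, backed by /dev/nvme0n1 and listening on 10.0.0.1:4420. The trace shows only bare mkdir/echo/ln calls, so the attribute each value lands in is inferred; the sketch below uses the standard kernel nvmet attribute names under that assumption, and requires root plus the nvmet and nvmet-tcp modules:

  # Sketch: kernel NVMe/TCP target via configfs, mirroring the trace.
  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo tcp          > "$port/addr_trtype"
  echo ipv4         > "$port/addr_adrfam"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo 4420         > "$port/addr_trsvcid"
  ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port

Once the port link exists the target is reachable, which the nvme discover call just below confirms: two discovery log entries, the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0.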
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:02.311 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:02.311 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:02.311 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:02.312 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:02.312 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:02.312 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:02.312 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:02.312 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:02.312 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:23:02.570 00:23:02.570 Discovery Log Number of Records 2, Generation counter 2 00:23:02.570 =====Discovery Log Entry 0====== 00:23:02.570 trtype: tcp 00:23:02.570 adrfam: ipv4 00:23:02.570 subtype: current discovery subsystem 00:23:02.570 treq: not specified, sq flow control disable supported 00:23:02.570 portid: 1 00:23:02.570 trsvcid: 4420 00:23:02.570 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:02.570 traddr: 10.0.0.1 00:23:02.570 eflags: none 00:23:02.570 sectype: none 00:23:02.570 =====Discovery Log Entry 1====== 00:23:02.570 trtype: tcp 00:23:02.570 adrfam: ipv4 00:23:02.570 subtype: nvme subsystem 00:23:02.570 treq: not specified, sq flow control disable supported 00:23:02.570 portid: 1 00:23:02.570 trsvcid: 4420 00:23:02.570 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:02.570 traddr: 10.0.0.1 00:23:02.570 eflags: none 00:23:02.570 sectype: none 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:02.570 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.571 nvme0n1 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
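Note: this is the host-side round trip that repeats for the rest of the section. connect_authenticate programs the negotiable parameters, attaches a controller with the key slot under test, verifies the controller actually came up, and tears it down. Condensed from the RPCs visible in the trace (rpc.py standing in for rpc_cmd):

  # One authentication round trip: digest sha256, DH group ffdhe2048, key slot 1.
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc.py bdev_nvme_detach_controller nvme0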
00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.571 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.831 nvme0n1 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.831 10:29:52 
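Note: each iteration also reprograms the kernel target before the host attaches; that is what the nvmet_auth_set_key bodies below do with their echo calls, writing the digest, DH group, and DHHC-1 secrets into the allowed host's configfs entry. The bare echoes in the trace do not name their destinations, so the standard nvmet host attributes are assumed here:

  # Sketch: target side of one iteration (nvmet_auth_set_key sha256 ffdhe2048 N).
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'  > "$host/dhchap_hash"      # negotiated digest
  echo ffdhe2048       > "$host/dhchap_dhgroup"   # negotiated DH group
  echo 'DHHC-1:00:...' > "$host/dhchap_key"       # host secret keyN (elided here)
  echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"  # controller secret ckeyN, bidirectional runs only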
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.831 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.092 nvme0n1 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:03.092 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.093 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.354 nvme0n1 00:23:03.354 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.354 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.354 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.354 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:23:03.354 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.354 10:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.354 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.613 nvme0n1 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.613 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.871 nvme0n1 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.871 10:29:53 
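Note: key slot 4 has no companion ckey (ckeys[4] is empty earlier in the trace), so this pass exercises unidirectional authentication: the attach call carries --dhchap-key key4 and no --dhchap-ctrlr-key. The host/auth.sh@58 line in the trace shows the bash idiom that makes this automatic:

  # ${ckeys[keyid]:+...} expands to the controller-key flag only when a ckey
  # exists, so bidirectional auth is tested exactly where a ckey was generated.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc.py bdev_nvme_attach_controller -b nvme0 --dhchap-key "key${keyid}" "${ckey[@]}"  # transport flags as in the trace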
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.871 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.872 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.130 nvme0n1 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.130 
10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.130 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.389 nvme0n1 00:23:04.389 10:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.389 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.390 10:29:54 
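Note: by this point the sweep has moved from ffdhe2048 to ffdhe3072 with the same five key slots, and the section keeps repeating the pattern for every remaining combination. The driving structure, reconstructed from the for-loop traces at host/auth.sh@100-103:

  # Every digest x DH group x key slot combination is exercised once.
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program kernel target
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
      done
    done
  done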
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.390 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.650 nvme0n1 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.650 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.651 10:29:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.651 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.911 nvme0n1 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.911 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:04.912 10:29:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.912 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.172 nvme0n1 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:05.172 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.173 10:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.739 nvme0n1 00:23:05.739 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.739 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.739 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.739 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.739 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:05.740 10:29:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.740 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.000 nvme0n1 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
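Each pass of the loop traced here repeats the same five steps: program the key pair into the kernel nvmet target via configfs, pin the SPDK initiator to one digest/dhgroup combination, attach, confirm the controller exists, and detach. A minimal stand-alone sketch of the ffdhe4096/keyid=2 pass above, assuming the nvmet host entry from the test setup still exists under configfs, that scripts/rpc.py reaches the running SPDK application (the trace's rpc_cmd is the suite's wrapper around it), and that the key2/ckey2 names were registered with the keyring earlier in the test:

    # Target side: standard nvmet configfs attributes for DH-HMAC-CHAP;
    # key material copied verbatim from the trace above. Needs root.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest for the challenge HMAC
    echo 'ffdhe4096'    > "$host/dhchap_dhgroup"   # FFDHE group for the DH exchange
    echo 'DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+:' > "$host/dhchap_key"
    echo 'DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO:' > "$host/dhchap_ctrl_key"

    # Host side: restrict negotiation to the pair under test, then attach with
    # the matching keys and verify the authenticated controller shows up.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0              # clean up for the next keyid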
00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.000 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.260 nvme0n1 00:23:06.260 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.260 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.260 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.260 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.260 10:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.260 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.260 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.260 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.260 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.260 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.520 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 nvme0n1 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.781 10:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.040 nvme0n1 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:07.040 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:07.041 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.041 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.041 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:07.041 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.041 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.041 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:07.041 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.041 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.298 10:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.868 nvme0n1 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 
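The secrets exchanged in each of these passes use the DHHC-1 container format, DHHC-1:<hh>:<base64 of secret plus CRC-32>:, where the middle field records the hash used to transform the configured secret (00 = no transformation, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). When a controller key is present, auth.sh@51 echoes it into the target as well and the attach authenticates in both directions; keyid 4 carries an empty ckey, so that pass exercises unidirectional authentication only (its attach passes --dhchap-key key4 with no --dhchap-ctrlr-key). A sketch of how the key1/ckey1 names referenced by --dhchap-key/--dhchap-ctrlr-key could be backed, assuming the keyring_file RPCs available in recent SPDK builds; the /tmp paths are illustrative and the secrets are copied from the keyid=1 entries in the trace:

    # Write the DHHC-1 secrets to files (mode 0600 in practice), then register
    # them with SPDK's file-based keyring under the names the attach call uses.
    # keyring_file_add_key is assumed to exist in this SPDK build.
    printf '%s' 'DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==:' > /tmp/key1
    printf '%s' 'DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==:' > /tmp/ckey1
    ./scripts/rpc.py keyring_file_add_key key1 /tmp/key1    # host-to-controller secret
    ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/ckey1  # controller-to-host secret (bidirectional)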
00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.869 10:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.438 nvme0n1 00:23:08.438 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.438 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.438 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.438 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.438 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.438 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.696 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.696 10:29:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.696 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.696 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.696 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.696 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.696 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.697 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.268 nvme0n1 00:23:09.268 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.268 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.268 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.269 10:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.860 nvme0n1 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.860 10:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.800 nvme0n1 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.800 10:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.736 nvme0n1 00:23:11.736 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.736 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.736 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.736 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.736 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.736 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.736 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.736 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.737 10:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.114 nvme0n1 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.114 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:13.115 
10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.115 10:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.497 nvme0n1 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.497 
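The trace on either side of this point repeats one fixed pattern per (digest, dhgroup, keyid) combination, driven by the nested loops at host/auth.sh@100-104 visible in the entries above. A minimal sketch of that driver, assuming only the loop variables and helper names that appear in the trace (digests, dhgroups, keys, nvmet_auth_set_key, connect_authenticate):

    for digest in "${digests[@]}"; do                       # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do                 # host/auth.sh@101
            for keyid in "${!keys[@]}"; do                  # host/auth.sh@102
                # Target side: install key $keyid under hmac($digest)/$dhgroup.
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                # Host side: reconnect and authenticate with the same parameters.
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done

Each pass therefore produces the same block of trace: key material echoed into the target configuration, one bdev_nvme_set_options call, one authenticated attach, a controller check, and a detach.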
10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.497 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:14.498 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.498 10:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.434 nvme0n1 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.434 10:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.815 nvme0n1 00:23:16.815 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.815 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.815 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.815 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.815 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.815 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.815 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.816 nvme0n1 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.816 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.075 nvme0n1 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:17.075 10:30:06 
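connect_authenticate itself reduces to two RPCs, and the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 is what makes the controller key optional: for keyid 4 the ckey is empty (the [[ -z '' ]] checks in the trace), so --dhchap-ctrlr-key is simply omitted from the attach. A condensed sketch built only from the calls visible in the trace; the surrounding bookkeeping is elided:

    # host/auth.sh@58: an empty or unset ckeys[keyid] expands to no arguments at all.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # host/auth.sh@60: restrict the host to exactly the digest/dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # host/auth.sh@61: authenticated attach over TCP to the main-namespace IP.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"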
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.075 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.076 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.076 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.076 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.076 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.336 nvme0n1 00:23:17.336 10:30:06 
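The DHHC-1 strings echoed throughout follow the NVMe in-band-authentication secret representation, DHHC-1:&lt;hh&gt;:&lt;base64 blob&gt;:, where &lt;hh&gt; names the hash the secret was transformed with (00 for none, 01/02/03 for SHA-256/384/512) and the blob is the raw secret with a 4-byte CRC32 appended. The framing description here is an assumption from the spec as used by nvme-cli/Linux; only the key values themselves come from the trace. A quick sanity check against key 0 above:

    # Assumed framing: DHHC-1:<hh>:base64(secret || crc32(secret)):
    key='DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28:'
    blob=${key#DHHC-1:*:}   # strip the "DHHC-1:00:" prefix (shortest match)
    blob=${blob%:}          # strip the trailing ':'
    printf '%s' "$blob" | base64 -d | wc -c   # 36 = 32-byte secret + 4-byte CRC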
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.336 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.336 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.336 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.336 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.336 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.336 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.336 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.336 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.336 10:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.336 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.597 nvme0n1 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.597 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.858 nvme0n1 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.858 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.118 nvme0n1 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.118 
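The get_main_ns_ip trace repeated throughout this run (nvmf/common.sh@741-755) resolves which address the host should dial: an associative array maps each transport to the name of an environment variable, and indirect expansion turns that name into the address (10.0.0.1 for tcp here). The function body is not printed by xtrace, so the following bash reconstruction is inferred from the traced commands rather than copied from the source:

get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@744 in the trace
	ip_candidates["tcp"]=NVMF_INITIATOR_IP       # common.sh@745
	# TEST_TRANSPORT is tcp in this run, so the tcp candidate is selected
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable *name*, e.g. NVMF_INITIATOR_IP
	[[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 here (common.sh@750)
	echo "${!ip}"                          # common.sh@755
}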
10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.118 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.119 10:30:07 
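nvmet_auth_set_key (host/auth.sh@42-51 in the trace) provisions the key on the kernel nvmet target before each connect attempt; xtrace strips the redirections, so only the echo arguments ('hmac(sha384)', the dhgroup, the DHHC-1 secrets) are visible. Assuming the echoes land in the standard nvmet configfs host attributes -- an assumption, since the target paths never appear in this log -- the function plausibly looks like:

nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	# hostnqn matches the -q argument used on the attach side
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
	echo "hmac(${digest})" > "$host/dhchap_hash"    # auth.sh@48: e.g. hmac(sha384)
	echo "$dhgroup" > "$host/dhchap_dhgroup"        # auth.sh@49: e.g. ffdhe3072
	echo "$key" > "$host/dhchap_key"                # auth.sh@50: DHHC-1:NN:...
	# auth.sh@51: the controller (bidirectional) key is optional
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}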
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.119 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.379 nvme0n1 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.379 10:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.379 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.639 nvme0n1 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.639 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.899 nvme0n1 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.899 
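Each "nvme0n1 ... bdev_nvme_get_controllers ... bdev_nvme_detach_controller" cycle above is one call to connect_authenticate (host/auth.sh@55-65): restrict the SPDK host to a single digest/dhgroup pair, attach with the named keyring entries, confirm the controller came up, then tear it down. A sketch reconstructed from the traced commands (key0..key4 and ckey0..ckey4 were registered earlier in the run), not the verbatim source:

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3 ckey
	# expands to nothing when no controller key exists for this keyid
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})          # auth.sh@58
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
		--dhchap-dhgroups "$dhgroup"                               # auth.sh@60
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"                    # auth.sh@61
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth.sh@64
	rpc_cmd bdev_nvme_detach_controller nvme0                          # auth.sh@65
}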
10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.899 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.900 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.159 nvme0n1 00:23:19.159 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.159 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.159 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.159 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.159 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.159 
10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.160 10:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.420 nvme0n1 00:23:19.420 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.420 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.420 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.420 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.420 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.420 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.681 10:30:09 
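rpc_cmd in these traces is the autotest wrapper that forwards to the SPDK application's JSON-RPC interface; its definition never appears in this log. A rough stand-in, assuming the default application socket (the real helper keeps a persistent rpc.py session, which this sketch does not):

rpc_cmd() {
	# $rootdir points at the spdk checkout in this workspace
	"$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"
}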
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.681 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.940 nvme0n1 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.940 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.941 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.941 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.941 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.941 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.941 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.941 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.941 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.941 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.941 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.199 nvme0n1 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.199 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.457 10:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.457 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.457 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.457 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.457 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.457 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.457 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.457 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.458 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.458 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.718 nvme0n1 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.718 10:30:10 
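Key 4 is the one entry provisioned without a controller key: the trace above shows "ckey=" empty and the "[[ -z '' ]]" guard, so the subsequent attach carries --dhchap-key key4 but no --dhchap-ctrlr-key, i.e. authentication is unidirectional for that keyid. The ${ckeys[keyid]:+...} expansion that makes this work is worth isolating; this standalone snippet (placeholder secret, not from the log) mirrors it:

ckeys=([0]="DHHC-1:03:placeholder" [4]="")   # keyid 4 deliberately empty, as in this run
for keyid in 0 4; do
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
done
# keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
# keyid=4 -> 0 extra arg(s):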
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.718 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.978 nvme0n1 00:23:20.978 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.979 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.979 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.979 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.979 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.979 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.979 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.979 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.979 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.979 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.239 10:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.809 nvme0n1 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.809 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.810 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.810 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.810 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.810 10:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 nvme0n1 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.380 10:30:12 
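
Every secret in this log uses the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> names the transformation applied to the underlying secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, per the NVMe in-band authentication spec) and the base64 payload is the secret followed by a 4-byte CRC-32. Decoding the keyid 0 secret above illustrates the layout (GNU head is assumed for the negative byte count):

    # Strips the trailing CRC-32 and prints the raw secret bytes.
    base64 -d <<< N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28 | head -c -4
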
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.380 10:30:12 
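
On the host side, each connect_authenticate pass reduces to two SPDK RPCs, reassembled here exactly as they appear in the trace (host/auth.sh@60-61): bdev_nvme_set_options pins the digest and DH group the initiator may negotiate, and bdev_nvme_attach_controller performs the authenticated connect using keys presumably registered with SPDK's keyring earlier in the test under the names keyN/ckeyN:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
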
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.380 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.316 nvme0n1 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.316 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.317 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.317 10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.317 
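
The get_main_ns_ip fragments repeated throughout (nvmf/common.sh@741-755) resolve which address to dial for the current transport. The map stores variable names rather than values, so the function ends with an indirect expansion; a paraphrase under that assumption, with TEST_TRANSPORT as the assumed selector:

    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
    echo "${!ip}"                          # -> 10.0.0.1 in this run
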
10:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.884 nvme0n1 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:23.884 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.885 10:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.450 nvme0n1 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.451 10:30:14 
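
keyid 4 is the one entry with no controller key: the trace shows ckey= expanding to nothing and [[ -z '' ]] holding, so the attach above carries only --dhchap-key key4 and authentication is unidirectional (the host proves itself; the controller is not challenged). The mechanism is the :+ alternate-value expansion from host/auth.sh@58:

    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Empty or unset ckeys[keyid] yields ckey=(), so "${ckey[@]}" contributes
    # nothing to the rpc_cmd line; otherwise it adds the flag plus key name.
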
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.451 10:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.828 nvme0n1 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.828 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.829 10:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.766 nvme0n1 00:23:26.766 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.766 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.766 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.766 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.766 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.766 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.023 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.023 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.023 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:27.023 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.023 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.023 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.023 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:27.023 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.024 
10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.024 10:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.961 nvme0n1 00:23:27.961 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.961 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.961 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.961 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.961 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.219 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.220 10:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.154 nvme0n1 00:23:29.154 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.154 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.154 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.154 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.154 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.414 10:30:18 
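
Each pass ends with the same verification and teardown (host/auth.sh@64-65): list controllers over RPC, require that exactly the controller created above exists, then detach so the next (digest, dhgroup, keyid) combination starts from a clean slate. The odd-looking [[ nvme0 == \n\v\m\e\0 ]] in the trace is just xtrace escaping the pattern side of the comparison; reassembled:

    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
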
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.414 10:30:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.414 10:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.414 10:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.414 10:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.351 nvme0n1 00:23:30.351 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.351 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.351 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.351 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.351 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:30.612 nvme0n1 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.612 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.873 nvme0n1 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:30.873 
10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.873 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.134 nvme0n1 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.134 
10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.134 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.135 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.135 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.135 10:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.395 nvme0n1 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.395 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.656 nvme0n1 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.656 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.917 nvme0n1 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.917 
10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.917 10:30:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.917 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.918 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.918 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.918 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.918 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.918 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.918 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.918 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.918 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.177 nvme0n1 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:32.177 10:30:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.177 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.178 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.178 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.178 10:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.438 nvme0n1 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.438 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.439 10:30:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.439 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.700 nvme0n1 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.700 
10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.700 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
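The trace above repeats one connect_authenticate cycle per key: pin the host side to a single DH-HMAC-CHAP digest and DH group, attach the controller with that keyid's key pair, check that it enumerates as nvme0, and detach before the next iteration. Below is a minimal standalone sketch of one such cycle, assuming rpc_cmd wraps scripts/rpc.py as in SPDK's test harness and that the key0/ckey0 key material was registered by setup steps outside this excerpt; flags and values are taken verbatim from the trace.

# One DH-HMAC-CHAP connect/verify/teardown cycle (hypothetical standalone form;
# key0/ckey0 are key names prepared by the test setup, not shown in this excerpt).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0

The outer loops then advance the DH group (ffdhe2048, ffdhe3072, ffdhe4096 in this excerpt) and replay the same keyids 0-4, so each digest/group/key combination is exercised once.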
00:23:32.960 nvme0n1 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:32.960 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.961 10:30:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.961 10:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.530 nvme0n1 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.530 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.530 10:30:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.531 10:30:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.531 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.814 nvme0n1 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.814 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.092 nvme0n1 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.092 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.351 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.352 10:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.611 nvme0n1 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.611 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.871 nvme0n1 00:23:34.871 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.871 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.871 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.871 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.871 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.871 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.131 10:30:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.131 10:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.698 nvme0n1 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.698 10:30:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.698 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.269 nvme0n1 00:23:36.269 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.269 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.269 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.269 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.269 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.269 10:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.269 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.269 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.269 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.269 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.527 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.528 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.528 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.528 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.093 nvme0n1 00:23:37.093 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.093 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.094 10:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.663 nvme0n1 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.663 10:30:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.663 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.664 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.664 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.664 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.664 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.664 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.664 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.664 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.664 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.664 10:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.600 nvme0n1 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MGMyNjdhNTYyY2M3NGVkY2Y3MzVjNjdkMTQ2ZjNuzC28: 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: ]] 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg0MTUxNjQ2MGY3ZTljN2RjMjFhMTZhZGQyYTYyODdmY2NhOWY0NzQ0MGRjNmQxMzRhYzAyOWI2NDk4ODIyY+Njzzk=: 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.600 10:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.536 nvme0n1 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.536 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.794 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.794 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.794 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.794 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.794 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.794 10:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.728 nvme0n1 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.728 10:30:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:40.728 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDg3OGNlNzI0ZTQ2YzU5YjE4NTBjN2MwMTQ3YzBLbbV+: 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: ]] 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY4ZjIxYzNkZDA1ZGQyYWMyZTU3ZDY5N2QzNzQ0MjD4qBWO: 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.729 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.989 10:30:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.989 10:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.928 nvme0n1 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==: 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: ]] 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f: 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:41.928 10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.928 
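The keyid loop above is the positive DH-HMAC-CHAP path: for each key index, nvmet_auth_set_key programs the kernel target side (the echoed digest, dhgroup, key and controller key land in the host's nvmet configfs attributes, though the trace only shows the echo side of the redirections), bdev_nvme_set_options pins the SPDK initiator to the matching digest/dhgroup pair, and a controller is attached with --dhchap-key/--dhchap-ctrlr-key, verified, and detached. A minimal sketch of one iteration, assuming the standard nvmet configfs attribute names and that the DHHC-1 secrets were registered with the initiator as key3/ckey3 earlier in the run (outside this excerpt):

    # target side: program the expected secrets for this host (paths inferred, not shown in the trace)
    h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$h/dhchap_hash"      # digest for this round
    echo ffdhe8192      > "$h/dhchap_dhgroup"   # DH group for this round
    echo 'DHHC-1:02:NzZkNTY1OTdhNjFkNzdlODc0MzY5MjgxN2RhNTU3ZDIyM2VmMTgxZDRiMjQyMzlirNVl0Q==:' > "$h/dhchap_key"
    echo 'DHHC-1:00:MDIxZmRiNTBmMjg4MzM2NWYyMDljZjczMTkzNmY5YzMyQ94f:' > "$h/dhchap_ctrl_key"
    # initiator side: restrict negotiation, attach, verify, detach (rpc.py = spdk/scripts/rpc.py)
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # attach implies the handshake passed
    rpc.py bdev_nvme_detach_controller nvme0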
10:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 nvme0n1 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2FjN2U4OTU2NzM3NTAyODA5YTdhNDkxOGZhZWNiZTQ2MjE5MTZlNzljNWM5MDIzMjk5N2U0MjAzNmEwY2YzYywln+Y=: 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.309 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.310 10:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.248 nvme0n1 00:23:44.248 10:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.248 10:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.248 10:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.248 10:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.248 10:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.248 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:23:44.506 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTgyZGVkNDdjN2E5ZjM0ZTliZjU2NzhhMmU3OWNkODc3MGEzMmJjYTFhZDJiMmYzODTCFg==: 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc4OGQxYjI3OTM5NTdmYzY2MTEwZjI1NTdhYTY1ZmE1OWRhZGQ3MWMyYzBkYWFmdCc3Vg==: 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.507 request: 00:23:44.507 { 00:23:44.507 "name": "nvme0", 00:23:44.507 "trtype": "tcp", 00:23:44.507 "traddr": "10.0.0.1", 00:23:44.507 "adrfam": "ipv4", 00:23:44.507 "trsvcid": "4420", 00:23:44.507 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:44.507 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:44.507 "prchk_reftag": false, 00:23:44.507 "prchk_guard": false, 00:23:44.507 "hdgst": false, 00:23:44.507 "ddgst": false, 00:23:44.507 "method": "bdev_nvme_attach_controller", 00:23:44.507 "req_id": 1 00:23:44.507 } 00:23:44.507 Got JSON-RPC error response 00:23:44.507 response: 00:23:44.507 { 00:23:44.507 "code": -5, 00:23:44.507 "message": "Input/output error" 00:23:44.507 } 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.507 10:30:34 
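With every keyid verified, auth.sh turns the checks around: an attach without any DHCHAP key (and, next, with a key the target does not expect) has to fail, which is what the NOT wrapper asserts. It runs the rpc_cmd, demands a non-zero exit status (es=1 above), and the harness then confirms with jq that the failed handshake left no controller behind. The same assertion in plain bash, reusing the target from the sketch above:

    # the unauthenticated attach must be rejected (the -5 Input/output error above)
    if rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
           -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "FAIL: attach without DHCHAP key succeeded" >&2
        exit 1
    fi
    # and it must not leave a half-created controller around
    (( $(rpc.py bdev_nvme_get_controllers | jq length) == 0 ))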
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.507 request: 00:23:44.507 { 00:23:44.507 "name": "nvme0", 00:23:44.507 "trtype": "tcp", 00:23:44.507 "traddr": "10.0.0.1", 00:23:44.507 "adrfam": "ipv4", 00:23:44.507 "trsvcid": "4420", 00:23:44.507 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:44.507 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:44.507 "prchk_reftag": false, 00:23:44.507 "prchk_guard": false, 00:23:44.507 "hdgst": false, 00:23:44.507 "ddgst": false, 00:23:44.507 "dhchap_key": "key2", 00:23:44.507 "method": "bdev_nvme_attach_controller", 00:23:44.507 "req_id": 1 00:23:44.507 } 00:23:44.507 Got JSON-RPC error response 00:23:44.507 response: 00:23:44.507 { 00:23:44.507 "code": -5, 00:23:44.507 "message": "Input/output error" 00:23:44.507 } 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:44.507 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.768 request: 00:23:44.768 { 00:23:44.768 "name": "nvme0", 00:23:44.768 "trtype": "tcp", 00:23:44.768 "traddr": "10.0.0.1", 00:23:44.768 "adrfam": "ipv4", 00:23:44.768 "trsvcid": "4420", 00:23:44.768 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:44.768 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:44.768 "prchk_reftag": false, 00:23:44.768 "prchk_guard": false, 00:23:44.768 "hdgst": false, 00:23:44.768 "ddgst": false, 00:23:44.768 "dhchap_key": "key1", 00:23:44.768 "dhchap_ctrlr_key": "ckey2", 00:23:44.768 "method": "bdev_nvme_attach_controller", 00:23:44.768 "req_id": 1 00:23:44.768 } 00:23:44.768 Got JSON-RPC error response 00:23:44.768 response: 00:23:44.768 { 00:23:44.768 "code": -5, 00:23:44.768 "message": "Input/output error" 00:23:44.768 } 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.768 rmmod nvme_tcp 00:23:44.768 rmmod nvme_fabrics 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1569712 ']' 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1569712 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1569712 ']' 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1569712 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1569712 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1569712' 00:23:44.768 killing process with pid 1569712 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1569712 00:23:44.768 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1569712 00:23:45.028 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.028 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.028 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.028 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.028 10:30:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.028 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.028 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.028 10:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:47.571 10:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:48.138 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:48.138 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:48.138 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:48.138 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:48.138 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:48.138 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:48.138 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:48.138 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:48.138 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:48.397 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:48.397 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:48.397 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:48.397 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:48.397 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:48.397 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:48.397 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:49.335 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:23:49.335 10:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.OnL /tmp/spdk.key-null.n8G /tmp/spdk.key-sha256.ztI /tmp/spdk.key-sha384.iQu /tmp/spdk.key-sha512.TGt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:49.335 10:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
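The cleanup above unwinds the kernel target in strict reverse order of its creation: first the auth linkage (the allowed_hosts symlink and the host directory), then clean_kernel_target drops the port-to-subsystem link and the configfs directories before unloading nvmet_tcp/nvmet. Reconstructed as a sketch; the destination of the bare 'echo 0' is not visible in the trace, and is assumed here to be the namespace enable attribute:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"     # drop the auth linkage first
    rmdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/namespaces/1/enable"                      # assumed target of the bare 'echo 0'
    rm -f "$nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                                 # only now can the modules go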
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:50.270 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:23:50.270 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:50.270 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:23:50.270 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:23:50.270 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:23:50.270 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:23:50.270 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:23:50.270 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:23:50.270 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:23:50.270 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:23:50.270 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:23:50.270 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:23:50.270 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:23:50.270 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:23:50.270 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:23:50.270 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:23:50.270 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:23:50.270 00:23:50.270 real 0m53.450s 00:23:50.270 user 0m51.241s 00:23:50.270 sys 0m5.306s 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.270 ************************************ 00:23:50.270 END TEST nvmf_auth_host 00:23:50.270 ************************************ 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.270 ************************************ 00:23:50.270 START TEST nvmf_digest 00:23:50.270 ************************************ 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:50.270 * Looking for test storage... 
00:23:50.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.270 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.271 
10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.271 10:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:52.177 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:52.177 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.177 
10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:52.177 Found net devices under 0000:08:00.0: cvl_0_0 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:52.177 Found net devices under 0000:08:00.1: cvl_0_1 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.177 10:30:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.177 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:23:52.178 00:23:52.178 --- 10.0.0.2 ping statistics --- 00:23:52.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.178 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:23:52.178 00:23:52.178 --- 10.0.0.1 ping statistics --- 00:23:52.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.178 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:52.178 ************************************ 00:23:52.178 START TEST nvmf_digest_clean 00:23:52.178 ************************************ 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
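nvmftestinit has just built the two-sided phy topology the digest tests run on: one port of the ice-driven NIC pair (cvl_0_0) is moved into a private namespace and becomes the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a cross-ping in each direction proves the path before any NVMe/TCP traffic flows. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator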
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1578274 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1578274 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1578274 ']' 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.178 10:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.178 [2024-07-25 10:30:41.758538] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:23:52.178 [2024-07-25 10:30:41.758634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.178 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.178 [2024-07-25 10:30:41.841472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.437 [2024-07-25 10:30:41.995620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.437 [2024-07-25 10:30:41.995698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.437 [2024-07-25 10:30:41.995729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.437 [2024-07-25 10:30:41.995755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.437 [2024-07-25 10:30:41.995778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
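nvmfappstart launches the target inside the namespace with --wait-for-rpc so accel modules could be configured before subsystem init; the clean variant skips DSA (dsa_initiator/dsa_target stay false), so common_target_config simply starts the framework and builds a null-bdev subsystem behind a TCP listener. The trace collapses that configuration into a single rpc_cmd batch; the roughly equivalent individual calls would be as follows, where the null-bdev size and block-size arguments are illustrative (the trace only shows the resulting null0), and waitforlisten is the harness helper seen above:

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                        # poll the RPC socket until the target answers
    rpc.py framework_start_init
    rpc.py bdev_null_create null0 100 512           # illustrative size/block-size arguments
    rpc.py nvmf_create_transport -t tcp -o          # opts as set in NVMF_TRANSPORT_OPTS above
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420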
00:23:52.437 [2024-07-25 10:30:41.995825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.437 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.695 null0 00:23:52.695 [2024-07-25 10:30:42.235282] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.695 [2024-07-25 10:30:42.259515] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1578365 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1578365 /var/tmp/bperf.sock 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1578365 ']' 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:52.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.695 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.695 [2024-07-25 10:30:42.312156] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:23:52.695 [2024-07-25 10:30:42.312253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578365 ] 00:23:52.695 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.695 [2024-07-25 10:30:42.373517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.954 [2024-07-25 10:30:42.494062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.954 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.954 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:52.954 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:52.954 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:52.954 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:53.212 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:53.212 10:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:53.793 nvme0n1 00:23:53.793 10:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:53.793 10:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:54.055 Running I/O for 2 seconds... 
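Every run_bperf cycle in this log follows the RPC choreography just traced: bdevperf starts paused on its own socket, the framework is initialized, a controller is attached with the TCP data digest enabled, and bdevperf.py drives the timed workload. Condensed, with paths relative to an SPDK checkout:

./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# resume startup once any accel options are in place
./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

# --ddgst turns on the NVMe/TCP data digest (CRC32C) for this controller
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# run the workload declared on the bdevperf command line (-t 2 seconds)
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests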
00:23:55.955 00:23:55.955 Latency(us) 00:23:55.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.955 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:55.955 nvme0n1 : 2.01 17643.31 68.92 0.00 0.00 7243.89 4223.43 13689.74 00:23:55.955 =================================================================================================================== 00:23:55.955 Total : 17643.31 68.92 0.00 0.00 7243.89 4223.43 13689.74 00:23:55.955 0 00:23:55.955 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:55.955 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:55.955 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:55.955 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:55.955 | select(.opcode=="crc32c") 00:23:55.955 | "\(.module_name) \(.executed)"' 00:23:55.955 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1578365 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1578365 ']' 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1578365 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1578365 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1578365' 00:23:56.214 killing process with pid 1578365 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1578365 00:23:56.214 Received shutdown signal, test time was about 2.000000 seconds 00:23:56.214 00:23:56.214 Latency(us) 00:23:56.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.214 =================================================================================================================== 00:23:56.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.214 10:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1578365 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1578696 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1578696 /var/tmp/bperf.sock 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1578696 ']' 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:56.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.473 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:56.473 [2024-07-25 10:30:46.208751] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:23:56.473 [2024-07-25 10:30:46.208846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578696 ] 00:23:56.473 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:56.473 Zero copy mechanism will not be used. 
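The pass/fail decision after each run, visible at host/digest.sh@93-96 in the trace, is a statistics check rather than an I/O check: the accel framework must report crc32c work, and with scan_dsa=false it must have been done by the software module. Roughly, using the jq filter from the trace:

# read "<module_name> <executed>" for the crc32c opcode
read -r acc_module acc_executed < <(
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

# digests must actually have been computed, and by the expected module
(( acc_executed > 0 ))
[[ $acc_module == software ]]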
00:23:56.473 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.731 [2024-07-25 10:30:46.270650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.731 [2024-07-25 10:30:46.390162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.731 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.731 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:56.731 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:56.731 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:56.731 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:57.297 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:57.298 10:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:57.556 nvme0n1 00:23:57.556 10:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:57.556 10:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:57.556 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:57.556 Zero copy mechanism will not be used. 00:23:57.556 Running I/O for 2 seconds... 
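This second pass repeats the measurement with only the I/O geometry changed: 128 KiB blocks at queue depth 16 instead of 4 KiB at depth 128. Because 131072 bytes exceeds bdevperf's 65536-byte zero-copy threshold, each large-block run logs that the zero-copy mechanism is disabled, so these runs also exercise the copying digest path. The variant invocation:

# large-block, low-queue-depth variant (first pass used -o 4096 -q 128)
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &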
00:24:00.086 00:24:00.086 Latency(us) 00:24:00.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.086 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:00.086 nvme0n1 : 2.00 4292.33 536.54 0.00 0.00 3722.54 743.35 11359.57 00:24:00.086 =================================================================================================================== 00:24:00.086 Total : 4292.33 536.54 0.00 0.00 3722.54 743.35 11359.57 00:24:00.086 0 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:00.086 | select(.opcode=="crc32c") 00:24:00.086 | "\(.module_name) \(.executed)"' 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1578696 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1578696 ']' 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1578696 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1578696 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1578696' 00:24:00.086 killing process with pid 1578696 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1578696 00:24:00.086 Received shutdown signal, test time was about 2.000000 seconds 00:24:00.086 00:24:00.086 Latency(us) 00:24:00.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.086 =================================================================================================================== 00:24:00.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1578696 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1579014 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1579014 /var/tmp/bperf.sock 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1579014 ']' 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:00.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.086 10:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.345 [2024-07-25 10:30:49.885459] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:24:00.345 [2024-07-25 10:30:49.885573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579014 ] 00:24:00.345 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.345 [2024-07-25 10:30:49.947605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.345 [2024-07-25 10:30:50.064615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.604 10:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.604 10:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:00.604 10:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:00.604 10:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:00.604 10:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:00.889 10:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:00.889 10:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.170 nvme0n1 00:24:01.170 10:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:01.170 10:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:01.429 Running I/O for 2 seconds... 
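Each cycle ends the same way, with the killprocess sequence seen after every run: before signalling the saved pid, the helper verifies the process still exists and that its command name is the expected reactor (and not sudo, in case the pid was recycled), then kills it and waits so the exit status is reaped. A condensed paraphrase of the pattern traced from autotest_common.sh:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    if [ "$(uname)" = Linux ]; then
        # refuse to signal sudo itself if the pid now names another process
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # works because bdevperf is a child of this shell
}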
00:24:03.330 00:24:03.330 Latency(us) 00:24:03.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.330 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:03.330 nvme0n1 : 2.01 18393.89 71.85 0.00 0.00 6941.14 3495.25 12524.66 00:24:03.330 =================================================================================================================== 00:24:03.330 Total : 18393.89 71.85 0.00 0.00 6941.14 3495.25 12524.66 00:24:03.330 0 00:24:03.330 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:03.330 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:03.330 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:03.330 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:03.330 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:03.330 | select(.opcode=="crc32c") 00:24:03.330 | "\(.module_name) \(.executed)"' 00:24:03.589 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:03.589 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:03.589 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:03.589 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:03.589 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1579014 00:24:03.589 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1579014 ']' 00:24:03.589 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1579014 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579014 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579014' 00:24:03.847 killing process with pid 1579014 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1579014 00:24:03.847 Received shutdown signal, test time was about 2.000000 seconds 00:24:03.847 00:24:03.847 Latency(us) 00:24:03.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.847 =================================================================================================================== 00:24:03.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1579014 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1579418 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1579418 /var/tmp/bperf.sock 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1579418 ']' 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:03.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.847 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:04.104 [2024-07-25 10:30:53.658858] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:24:04.104 [2024-07-25 10:30:53.658957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579418 ] 00:24:04.104 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:04.105 Zero copy mechanism will not be used. 
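At this point the fourth and last combination starts. The whole sweep is the same function called four times; only the rw/bs/qd triple changes, with DSA scanning off throughout. Equivalent, purely for orientation:

# randread/randwrite x {4 KiB qd128, 128 KiB qd16}, software crc32c only
for spec in 'randread 4096 128' 'randread 131072 16' \
            'randwrite 4096 128' 'randwrite 131072 16'; do
    read -r rw bs qd <<< "$spec"
    run_bperf "$rw" "$bs" "$qd" false   # trailing arg: scan_dsa
done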
00:24:04.105 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.105 [2024-07-25 10:30:53.720129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.105 [2024-07-25 10:30:53.836696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.363 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.363 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:04.363 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:04.363 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:04.363 10:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:04.621 10:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.621 10:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:05.186 nvme0n1 00:24:05.187 10:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:05.187 10:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:05.444 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:05.444 Zero copy mechanism will not be used. 00:24:05.444 Running I/O for 2 seconds... 
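All four runs print the same bdevperf table. The Job row carries runtime(s), IOPS, MiB/s, Fail/s, TO/s and average/min/max latency in microseconds; the Total row repeats the figures without the runtime column. A throwaway extraction over a saved copy of the output (field positions assume the raw table, without this log's timestamp prefixes):

awk '$1 == "Total" { print "IOPS=" $3, "MiB/s=" $4 }' bperf.log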
00:24:07.342 00:24:07.342 Latency(us) 00:24:07.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.342 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:07.342 nvme0n1 : 2.00 4050.54 506.32 0.00 0.00 3941.07 3058.35 14757.74 00:24:07.342 =================================================================================================================== 00:24:07.342 Total : 4050.54 506.32 0.00 0.00 3941.07 3058.35 14757.74 00:24:07.342 0 00:24:07.342 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:07.343 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:07.343 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:07.343 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:07.343 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:07.343 | select(.opcode=="crc32c") 00:24:07.343 | "\(.module_name) \(.executed)"' 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1579418 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1579418 ']' 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1579418 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579418 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579418' 00:24:07.601 killing process with pid 1579418 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1579418 00:24:07.601 Received shutdown signal, test time was about 2.000000 seconds 00:24:07.601 00:24:07.601 Latency(us) 00:24:07.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.601 =================================================================================================================== 00:24:07.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.601 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1579418 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1578274 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1578274 ']' 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1578274 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1578274 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1578274' 00:24:07.860 killing process with pid 1578274 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1578274 00:24:07.860 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1578274 00:24:08.119 00:24:08.119 real 0m16.103s 00:24:08.119 user 0m32.537s 00:24:08.119 sys 0m4.061s 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.119 ************************************ 00:24:08.119 END TEST nvmf_digest_clean 00:24:08.119 ************************************ 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:08.119 ************************************ 00:24:08.119 START TEST nvmf_digest_error 00:24:08.119 ************************************ 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1579844 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:08.119 10:30:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1579844 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1579844 ']' 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.119 10:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.377 [2024-07-25 10:30:57.917540] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:24:08.377 [2024-07-25 10:30:57.917636] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.377 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.377 [2024-07-25 10:30:57.982197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.377 [2024-07-25 10:30:58.097341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.377 [2024-07-25 10:30:58.097410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.377 [2024-07-25 10:30:58.097427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.377 [2024-07-25 10:30:58.097441] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.377 [2024-07-25 10:30:58.097453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
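The error-path test starting here is configured identically to the clean one except for one extra step, visible just below: while the target is still held at --wait-for-rpc, the crc32c opcode is reassigned from the software module to the accel 'error' module, which can corrupt results on demand. Sketched against the target's default RPC socket (the start-init call is implied by --wait-for-rpc rather than shown verbatim in this excerpt):

# route crc32c through the injectable error module, then let startup finish
./scripts/rpc.py accel_assign_opc -o crc32c -m error
./scripts/rpc.py framework_start_init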
00:24:08.377 [2024-07-25 10:30:58.097491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.636 [2024-07-25 10:30:58.198178] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.636 null0 00:24:08.636 [2024-07-25 10:30:58.303062] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.636 [2024-07-25 10:30:58.327281] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1579871 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1579871 /var/tmp/bperf.sock 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1579871 ']' 
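The trace that follows arms the failure. On the bperf side, per-error statistics are enabled and retries made unlimited, so injected digest faults surface as transient transport errors instead of failing I/O outright; on the target side, injection is kept off while the digest-enabled controller attaches cleanly, then 256 crc32c corruptions are queued before the timed run. In order:

# initiator: count NVMe errors and retry forever instead of failing I/O
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# target: keep injection disabled while the controller attaches
./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# target: corrupt the next 256 crc32c results, then drive the workload
./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests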
00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:08.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.636 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.636 [2024-07-25 10:30:58.379237] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:24:08.636 [2024-07-25 10:30:58.379335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579871 ] 00:24:08.636 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.895 [2024-07-25 10:30:58.440295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.895 [2024-07-25 10:30:58.556954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.895 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:08.895 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:08.895 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:08.895 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.461 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:09.461 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.461 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.461 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.461 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.461 10:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.719 nvme0n1 00:24:09.719 10:30:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:09.719 10:30:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.719 10:30:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.719 10:30:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.719 10:30:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:09.719 10:30:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:09.978 Running I/O for 2 seconds... 00:24:09.978 [2024-07-25 10:30:59.596873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10f62c0) 00:24:09.978 [2024-07-25 10:30:59.596930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.978 [2024-07-25 10:30:59.596951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.978 [2024-07-25 10:30:59.614935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10f62c0) 00:24:09.978 [2024-07-25 10:30:59.614974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.978 [2024-07-25 10:30:59.614994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.978 [2024-07-25 10:30:59.629382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10f62c0) 00:24:09.978 [2024-07-25 10:30:59.629418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.978 [2024-07-25 10:30:59.629438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.978 [2024-07-25 10:30:59.646792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10f62c0) 00:24:09.978 [2024-07-25 10:30:59.646829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.978 [2024-07-25 10:30:59.646858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.978 [2024-07-25 10:30:59.664088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10f62c0) 00:24:09.978 [2024-07-25 10:30:59.664124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.978 [2024-07-25 10:30:59.664145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.978 [2024-07-25 10:30:59.677219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10f62c0) 00:24:09.978 [2024-07-25 10:30:59.677255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.978 [2024-07-25 10:30:59.677275] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[~90 near-identical injected digest-error entries elided, 2024-07-25 10:30:59.695044 through 10:31:01.573318. Each repeats the same three-record pattern on tqpair 0x10f62c0, differing only in timestamp, cid and lba:
  nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10f62c0)
  nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<cid> nsid:1 lba:<lba> len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
  nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<cid> cdw0:0 sqhd:0001 p:0 m:0 dnr:0]
00:24:12.049
00:24:12.049 Latency(us)
00:24:12.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:12.049 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:12.049 nvme0n1 : 2.01 16468.48 64.33 0.00 0.00 7762.54 3980.71 23204.60
00:24:12.049 ===================================================================================================================
00:24:12.049 Total : 16468.48 64.33 0.00 0.00 7762.54 3980.71 23204.60
00:24:12.049 0
00:24:12.049 10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:24:12.365 10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 129 > 0 ))
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1579871
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1579871 ']'
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1579871
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579871
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579871'
killing process with pid 1579871
10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1579871
Received shutdown signal, test time was about 2.000000 seconds
00:24:12.365
00:24:12.365 Latency(us)
00:24:12.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:12.365 ===================================================================================================================
00:24:12.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:12.365 10:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1579871
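The check traced above is the pass/fail gate for this digest subtest: bperf ran randread traffic while received-data crc32c results were being corrupted, and the test asserts that the bdev's transient-transport-error counter is non-zero (here 129). A minimal bash sketch of that check, reconstructed from the trace (helper names, socket path and jq filter are the ones shown above; the actual host/digest.sh implementation may differ in detail):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    bperf_rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    get_transient_errcount() {
        # With bdev_nvme_set_options --nvme-error-stat, bdev_get_iostat reports
        # per-NVMe-status-code counters under driver_specific.nvme_error.
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    errs=$(get_transient_errcount nvme0n1)
    (( errs > 0 ))   # in the run above this evaluated as (( 129 > 0 ))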
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1580270
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1580270 /var/tmp/bperf.sock
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1580270 ']'
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:12.622 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:12.622 [2024-07-25 10:31:02.177742] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:24:12.622 [2024-07-25 10:31:02.177835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580270 ]
00:24:12.622 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:12.622 Zero copy mechanism will not be used.
00:24:12.622 EAL: No free 2048 kB hugepages reported on node 1
00:24:12.622 [2024-07-25 10:31:02.237424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:12.622 [2024-07-25 10:31:02.354140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:12.880 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:12.880 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:24:12.880 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:12.880 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:13.138 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:13.138 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.138 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:13.138 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.138 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:13.138 10:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:13.396 nvme0n1
00:24:13.396 10:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:13.396 10:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.396 10:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:13.396 10:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.396 10:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:13.396 10:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:13.654 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:13.654 Zero copy mechanism will not be used.
00:24:13.654 Running I/O for 2 seconds...
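Taken together, the traced setup above is the whole injection recipe for this pass: turn on per-status NVMe error counters and infinite bdev retries via bdev_nvme_set_options, attach the target with TCP data digest enabled (--ddgst), arm the accel layer to corrupt crc32c results, then drive 128 KiB random reads through bdevperf. A condensed sketch of the same RPC sequence, assuming a bdevperf instance is already listening on /var/tmp/bperf.sock (a summary of the traced commands, not an excerpt of digest.sh):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable    # start from a clean injection state
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32    # same flags as the trace
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest computation surfaces in the output below as a data digest error on the receive path, and the affected READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of success.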
00:24:13.654 [2024-07-25 10:31:03.262493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0)
00:24:13.654 [2024-07-25 10:31:03.262563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.654 [2024-07-25 10:31:03.262583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (crc32c data digest error, the failed READ len:32, a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every read on tqpair=(0x1e779d0) from 10:31:03.270 through 10:31:04.150; individual entries elided ...]
00:24:14.433 [2024-07-25 10:31:04.150459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0)
00:24:14.433 [2024-07-25 10:31:04.150498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-07-25 10:31:04.150517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.433 [2024-07-25 10:31:04.157588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.433 [2024-07-25 10:31:04.157621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-07-25 10:31:04.157638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.433 [2024-07-25 10:31:04.164703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.433 [2024-07-25 10:31:04.164735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-07-25 10:31:04.164753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.433 [2024-07-25 10:31:04.171780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.433 [2024-07-25 10:31:04.171812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-07-25 10:31:04.171830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.433 [2024-07-25 10:31:04.178866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.433 [2024-07-25 10:31:04.178899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-07-25 10:31:04.178917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.433 [2024-07-25 10:31:04.186592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.433 [2024-07-25 10:31:04.186625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-07-25 10:31:04.186644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.433 [2024-07-25 10:31:04.195341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.433 [2024-07-25 10:31:04.195376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-07-25 10:31:04.195402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.433 [2024-07-25 10:31:04.204395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.433 
[2024-07-25 10:31:04.204429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-07-25 10:31:04.204448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.691 [2024-07-25 10:31:04.213326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.691 [2024-07-25 10:31:04.213362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.691 [2024-07-25 10:31:04.213380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.691 [2024-07-25 10:31:04.222215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.691 [2024-07-25 10:31:04.222250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.691 [2024-07-25 10:31:04.222269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.691 [2024-07-25 10:31:04.231475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.691 [2024-07-25 10:31:04.231519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.691 [2024-07-25 10:31:04.231537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.691 [2024-07-25 10:31:04.239952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.691 [2024-07-25 10:31:04.239986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.691 [2024-07-25 10:31:04.240004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.691 [2024-07-25 10:31:04.247221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.691 [2024-07-25 10:31:04.247254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.691 [2024-07-25 10:31:04.247271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.691 [2024-07-25 10:31:04.251091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.691 [2024-07-25 10:31:04.251123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.251141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.258145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.258178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.258197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.265228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.265270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.265288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.272326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.272358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.272376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.279755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.279789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.279807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.287801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.287836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.287854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.295529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.295563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.295582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.303352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.303394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.303412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.310493] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.310525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.310543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.317551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.317583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.317601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.324585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.324624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.324641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.331636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.331667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.331685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.338843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.338877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.338895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.345938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.345971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.345989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.352937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.352969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.352986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
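The flood of "data digest error" entries through this stretch of the log records the host-side NVMe/TCP initiator rejecting received data PDUs whose data digest (DDGST) field does not match the CRC32C it recomputes over the payload (the nvme_tcp_accel_seq_recv_compute_crc32_done callback named in each entry suggests SPDK routes that computation through its accel framework); each affected READ is then completed with the retryable status the paired line prints as COMMAND TRANSIENT TRANSPORT ERROR (00/22), with dnr:0 leaving the host free to retry. For reference, below is a minimal standalone sketch of the CRC32C (Castagnoli) checksum that the DDGST field carries per the NVMe/TCP transport spec — an illustrative bitwise implementation with a hypothetical crc32c() helper, not SPDK's code:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* CRC32C (Castagnoli), reflected polynomial 0x82F63B78 -- the checksum the
 * NVMe/TCP DDGST field carries over a PDU's DATA. A received digest that
 * disagrees with this recomputed value is what the entries above report as
 * a "data digest error". Illustrative sketch only, not SPDK's implementation. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    crc = ~crc;                      /* conventional pre-inversion */
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)  /* one bit of the byte per round */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return ~crc;                     /* conventional post-inversion */
}

int main(void)
{
    /* Standard CRC check string; CRC32C("123456789") is 0xE3069283. */
    const char payload[] = "123456789";

    printf("crc32c = 0x%08x\n", crc32c(0, payload, sizeof(payload) - 1));
    return 0;
}

Because the digest covers only the transport payload, a mismatch is surfaced as a retryable transport-level failure rather than a media error, which is why every completion here carries status (00/22) with dnr:0 instead of an I/O error — behavior this run appears to exercise deliberately, given the steady roughly 7 ms cadence of the failures.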
00:24:14.692 [2024-07-25 10:31:04.359901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.359933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.359951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.366858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.366897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.366914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.373834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.373865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.373882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.380784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.380815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.380832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.387742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.387775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.387799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.394758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.394789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.394807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.401735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.401765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.401783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.408718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.408749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.408766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.415660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.415691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.415709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.422601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.422632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.422650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.429555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.429585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.429603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.436566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.436598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.436615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.443565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.443596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.443613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.450511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.450553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.450571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.457444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.457477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.457502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.692 [2024-07-25 10:31:04.464448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.692 [2024-07-25 10:31:04.464478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.692 [2024-07-25 10:31:04.464504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.471537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.471570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.471588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.478582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.478613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.478631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.485586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.485617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.485635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.492617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.492647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.492664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.499660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.499691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.499708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.506684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.506715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.506732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.513735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.513767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.513784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.520738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.520770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.520788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.527816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.527847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.527864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.534771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.534810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.534827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.542587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.542619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.542636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.549598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.549630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 
[2024-07-25 10:31:04.549648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.556611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.556644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.556661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.563638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.563669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.563687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.570375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.570406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.570431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.577443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.577474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.577500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.584193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.584225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.584242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.591241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.591271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.591289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.598132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.598163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.598180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.605006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.605036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.605054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.611981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.612036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.612053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.619013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.951 [2024-07-25 10:31:04.619044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.951 [2024-07-25 10:31:04.619062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.951 [2024-07-25 10:31:04.625852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.625883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.625901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.632848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.632894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.632912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.639834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.639867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.639884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.646782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.646813] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.646831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.653739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.653775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.653792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.660836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.660877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.660895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.667808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.667841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.667858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.674708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.674739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.674756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.681599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.681630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.681648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.688700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.688733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.688763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.695364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.695398] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.695416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.701999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.702032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.702051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.708940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.708972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.708989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.715987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.716027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.716045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.952 [2024-07-25 10:31:04.723028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:14.952 [2024-07-25 10:31:04.723060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.952 [2024-07-25 10:31:04.723078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.730184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:15.211 [2024-07-25 10:31:04.730218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.211 [2024-07-25 10:31:04.730235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.737256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:15.211 [2024-07-25 10:31:04.737289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.211 [2024-07-25 10:31:04.737307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.744291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e779d0) 00:24:15.211 [2024-07-25 10:31:04.744323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.211 [2024-07-25 10:31:04.744341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.751318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:15.211 [2024-07-25 10:31:04.751359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.211 [2024-07-25 10:31:04.751378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.758273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:15.211 [2024-07-25 10:31:04.758305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.211 [2024-07-25 10:31:04.758323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.765265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:15.211 [2024-07-25 10:31:04.765298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.211 [2024-07-25 10:31:04.765315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.772311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:15.211 [2024-07-25 10:31:04.772342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.211 [2024-07-25 10:31:04.772360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.779227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:15.211 [2024-07-25 10:31:04.779266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.211 [2024-07-25 10:31:04.779284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.786256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0) 00:24:15.211 [2024-07-25 10:31:04.786287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.211 [2024-07-25 10:31:04.786305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.211 [2024-07-25 10:31:04.794206] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e779d0)
00:24:15.211 [2024-07-25 10:31:04.794239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.211 [2024-07-25 10:31:04.794256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.211 [... the same data-digest-error / READ / TRANSIENT TRANSPORT ERROR triplet repeats on qid:1 roughly every 7 ms, with varying cid and lba, from 10:31:04.801 through 10:31:05.252 ...]
00:24:15.730
00:24:15.730 Latency(us)
00:24:15.730 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:24:15.730 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:15.730 	 nvme0n1             :       2.00    4242.14     530.27       0.00       0.00    3767.09     922.36   10437.21
00:24:15.730 ===================================================================================================================
00:24:15.730 Total                                  :               4242.14     530.27       0.00       0.00    3767.09     922.36   10437.21
00:24:15.730 0
00:24:15.730 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:15.730 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:15.730 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:15.730 | .driver_specific
00:24:15.730 | .nvme_error
00:24:15.730 | .status_code
00:24:15.730 | .command_transient_transport_error'
00:24:15.730 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
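The trace above is digest.sh's get_transient_errcount helper: with --nvme-error-stat enabled on the bdev_nvme module, bdev_get_iostat reports a per-status-code NVMe error breakdown, and the jq filter pulls out the COMMAND TRANSIENT TRANSPORT ERROR (00/22) count. A minimal standalone sketch of the same query, using the rpc.py path, socket, and bdev name from this run:

  #!/usr/bin/env bash
  # Sketch: read the transient-transport-error counter from bdevperf's RPC socket.
  # Assumes bdevperf was started with bdev_nvme_set_options --nvme-error-stat.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  bdev=nvme0n1
  "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
                  | .driver_specific
                  | .nvme_error
                  | .status_code
                  | .command_transient_transport_error'

The check that follows, (( 273 > 0 )), only requires the counter to be positive; 273 is simply how many transient transport errors this 2-second randread run happened to accumulate.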
00:24:15.988 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 273 > 0 ))
00:24:15.988 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1580270
00:24:15.988 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1580270 ']'
00:24:15.988 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1580270
00:24:15.988 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:24:15.989 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:15.989 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580270
00:24:15.989 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:15.989 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:15.989 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580270'
00:24:15.989 killing process with pid 1580270
00:24:15.989 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1580270
00:24:15.989 Received shutdown signal, test time was about 2.000000 seconds
00:24:15.989
00:24:15.989 Latency(us)
00:24:15.989 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:24:15.989 ===================================================================================================================
00:24:15.989 Total                                  :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:24:15.989 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1580270
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1580587
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1580587 /var/tmp/bperf.sock
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1580587 ']'
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:16.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
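run_bperf_err restarts bdevperf in RPC-wait mode (-z) for the randwrite case and blocks in waitforlisten until the UNIX socket accepts RPCs. A sketch of that launch-and-wait step, with waitforlisten (an autotest_common.sh helper) approximated here by polling the real rpc_get_methods RPC; the flags are copied from the trace above:

  #!/usr/bin/env bash
  # Launch bdevperf idle (-z: wait for RPC configuration before running I/O).
  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  "$bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Approximation of waitforlisten: poll until the RPC socket answers.
  until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1
  done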
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:16.247 10:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:16.247 [2024-07-25 10:31:05.865003] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:24:16.247 [2024-07-25 10:31:05.865102] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580587 ]
00:24:16.247 EAL: No free 2048 kB hugepages reported on node 1
00:24:16.247 [2024-07-25 10:31:05.926009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:16.505 [2024-07-25 10:31:06.042517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:16.505 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:16.505 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:24:16.505 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:16.505 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:16.762 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:16.762 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.762 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:16.762 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.762 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:16.762 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:17.328 nvme0n1
00:24:17.328 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:17.328 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.328 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:17.328 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.328 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:17.328 10:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:17.328 Running I/O for 2 seconds...
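The setup just traced is the whole error-injection path for the write side: count NVMe errors per status code and retry failed I/O indefinitely, clear any stale crc32c injection, attach the target with data digest enabled (--ddgst), then have the accel layer corrupt the next 256 crc32c results so the data digest check fails on the wire. A condensed sketch of that RPC sequence; note that bperf_rpc targets bdevperf's socket, while rpc_cmd uses the framework's default application socket, assumed here to be /var/tmp/spdk.sock:

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock
  app_sock=/var/tmp/spdk.sock   # assumption: rpc_cmd's default socket
  # Per-status-code NVMe error counters; -1 retries transient failures forever.
  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start clean, attach with data digest, then corrupt 256 crc32c operations.
  "$rpc" -s "$app_sock" accel_error_inject_error -o crc32c -t disable
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
          -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$rpc" -s "$app_sock" accel_error_inject_error -o crc32c -t corrupt -i 256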
00:24:17.328 [2024-07-25 10:31:06.997335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190ee5c8 00:24:17.328 [2024-07-25 10:31:06.998427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.328 [2024-07-25 10:31:06.998468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.328 [2024-07-25 10:31:07.011666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190f7970 00:24:17.328 [2024-07-25 10:31:07.012506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.328 [2024-07-25 10:31:07.012538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.328 [2024-07-25 10:31:07.026034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190e4de8 00:24:17.328 [2024-07-25 10:31:07.027066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.328 [2024-07-25 10:31:07.027097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.328 [2024-07-25 10:31:07.039011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190f2d80 00:24:17.328 [2024-07-25 10:31:07.040879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.328 [2024-07-25 10:31:07.040909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.328 [2024-07-25 10:31:07.050766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190fe720 00:24:17.328 [2024-07-25 10:31:07.051609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.328 [2024-07-25 10:31:07.051639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.328 [2024-07-25 10:31:07.065085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190e2c28 00:24:17.328 [2024-07-25 10:31:07.066114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.328 [2024-07-25 10:31:07.066144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.328 [2024-07-25 10:31:07.079359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190fdeb0 00:24:17.328 [2024-07-25 10:31:07.080594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.328 [2024-07-25 10:31:07.080625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 
sqhd:0023 p:0 m:0 dnr:0 00:24:17.328 [2024-07-25 10:31:07.093649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190fa7d8 00:24:17.328 [2024-07-25 10:31:07.095052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.328 [2024-07-25 10:31:07.095082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.586 [2024-07-25 10:31:07.108093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190de038 00:24:17.586 [2024-07-25 10:31:07.109712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.586 [2024-07-25 10:31:07.109743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.586 [2024-07-25 10:31:07.122369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190e4578 00:24:17.586 [2024-07-25 10:31:07.124167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.586 [2024-07-25 10:31:07.124196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.586 [2024-07-25 10:31:07.136673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190e6b70 00:24:17.586 [2024-07-25 10:31:07.138668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.138698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.150963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190f8e88 00:24:17.587 [2024-07-25 10:31:07.153151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.153181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.160716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190fef90 00:24:17.587 [2024-07-25 10:31:07.161574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.161604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.175033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190e6b70 00:24:17.587 [2024-07-25 10:31:07.176071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.176101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.189294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190e0ea0 00:24:17.587 [2024-07-25 10:31:07.190525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.190554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.203570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190e5658 00:24:17.587 [2024-07-25 10:31:07.204988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.205018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.217816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190f4298 00:24:17.587 [2024-07-25 10:31:07.219427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.219456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.229387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190ea248 00:24:17.587 [2024-07-25 10:31:07.230040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.230070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.243656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190fdeb0 00:24:17.587 [2024-07-25 10:31:07.244513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.244542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.257473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.587 [2024-07-25 10:31:07.258649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.258679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.272248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.587 [2024-07-25 10:31:07.272502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.272532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.286938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.587 [2024-07-25 10:31:07.287188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.287216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.301598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.587 [2024-07-25 10:31:07.301838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.301866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.316228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.587 [2024-07-25 10:31:07.316473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.316514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.330879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.587 [2024-07-25 10:31:07.331121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.331149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.345530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.587 [2024-07-25 10:31:07.345773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.345801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.587 [2024-07-25 10:31:07.360190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.587 [2024-07-25 10:31:07.360445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.587 [2024-07-25 10:31:07.360475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.846 [2024-07-25 10:31:07.375027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.846 [2024-07-25 10:31:07.375274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.846 [2024-07-25 10:31:07.375303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.846 [2024-07-25 10:31:07.389659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.846 [2024-07-25 10:31:07.389900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.846 [2024-07-25 10:31:07.389929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.846 [2024-07-25 10:31:07.404263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.846 [2024-07-25 10:31:07.404507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.846 [2024-07-25 10:31:07.404536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.846 [2024-07-25 10:31:07.418915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.846 [2024-07-25 10:31:07.419158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.846 [2024-07-25 10:31:07.419187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.846 [2024-07-25 10:31:07.433552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.846 [2024-07-25 10:31:07.433796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.846 [2024-07-25 10:31:07.433825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.846 [2024-07-25 10:31:07.448179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.846 [2024-07-25 10:31:07.448430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.846 [2024-07-25 10:31:07.448459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.846 [2024-07-25 10:31:07.462820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.846 [2024-07-25 10:31:07.463061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.846 [2024-07-25 10:31:07.463089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.846 [2024-07-25 10:31:07.477425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.846 [2024-07-25 10:31:07.477678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.846 [2024-07-25 
10:31:07.477706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.846 [2024-07-25 10:31:07.492075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.847 [2024-07-25 10:31:07.492319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.847 [2024-07-25 10:31:07.492347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.847 [2024-07-25 10:31:07.506709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.847 [2024-07-25 10:31:07.506953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.847 [2024-07-25 10:31:07.506982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.847 [2024-07-25 10:31:07.521336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.847 [2024-07-25 10:31:07.521594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.847 [2024-07-25 10:31:07.521623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.847 [2024-07-25 10:31:07.536056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.847 [2024-07-25 10:31:07.536299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.847 [2024-07-25 10:31:07.536327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.847 [2024-07-25 10:31:07.550681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.847 [2024-07-25 10:31:07.550921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.847 [2024-07-25 10:31:07.550949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.847 [2024-07-25 10:31:07.565320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.847 [2024-07-25 10:31:07.565572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.847 [2024-07-25 10:31:07.565601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.847 [2024-07-25 10:31:07.579943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118 00:24:17.847 [2024-07-25 10:31:07.580186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:17.847 [2024-07-25 10:31:07.580215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:17.847 [2024-07-25 10:31:07.594611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:17.847 [2024-07-25 10:31:07.594856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.847 [2024-07-25 10:31:07.594885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:17.847 [2024-07-25 10:31:07.609232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:17.847 [2024-07-25 10:31:07.609475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.847 [2024-07-25 10:31:07.609510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.624029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.624275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.624304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.638884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.639131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.639160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.653569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.653815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.653843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.668314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.668574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.668604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.682976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.683224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.683252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.697606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.697850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.697891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.712172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.712414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.712442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.726791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.727030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.727058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.741465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.741747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.741778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.756230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.756475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.756511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.770920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.771163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.771191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.785836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.786080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.786108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.800674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.800919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.800948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.815344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.106 [2024-07-25 10:31:07.815600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.106 [2024-07-25 10:31:07.815628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.106 [2024-07-25 10:31:07.830030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.107 [2024-07-25 10:31:07.830280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.107 [2024-07-25 10:31:07.830308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.107 [2024-07-25 10:31:07.844785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.107 [2024-07-25 10:31:07.845026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.107 [2024-07-25 10:31:07.845055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.107 [2024-07-25 10:31:07.859496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.107 [2024-07-25 10:31:07.859741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.107 [2024-07-25 10:31:07.859769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.107 [2024-07-25 10:31:07.874134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.107 [2024-07-25 10:31:07.874376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.107 [2024-07-25 10:31:07.874404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:07.889312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.365 [2024-07-25 10:31:07.889581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.365 [2024-07-25 10:31:07.889609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:07.903944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.365 [2024-07-25 10:31:07.904191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.365 [2024-07-25 10:31:07.904218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:07.918634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.365 [2024-07-25 10:31:07.918879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.365 [2024-07-25 10:31:07.918907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:07.933279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.365 [2024-07-25 10:31:07.933524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.365 [2024-07-25 10:31:07.933553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:07.948114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.365 [2024-07-25 10:31:07.948358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.365 [2024-07-25 10:31:07.948387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:07.962742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.365 [2024-07-25 10:31:07.962987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.365 [2024-07-25 10:31:07.963015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:07.977344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.365 [2024-07-25 10:31:07.977598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.365 [2024-07-25 10:31:07.977627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:07.991964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.365 [2024-07-25 10:31:07.992204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.365 [2024-07-25 10:31:07.992232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:08.006581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.365 [2024-07-25 10:31:08.006827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.365 [2024-07-25 10:31:08.006854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.365 [2024-07-25 10:31:08.021237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.366 [2024-07-25 10:31:08.021488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.366 [2024-07-25 10:31:08.021517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.366 [2024-07-25 10:31:08.035860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.366 [2024-07-25 10:31:08.036104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.366 [2024-07-25 10:31:08.036132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.366 [2024-07-25 10:31:08.050547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.366 [2024-07-25 10:31:08.050791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.366 [2024-07-25 10:31:08.050820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.366 [2024-07-25 10:31:08.065172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.366 [2024-07-25 10:31:08.065415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.366 [2024-07-25 10:31:08.065443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.366 [2024-07-25 10:31:08.079776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.366 [2024-07-25 10:31:08.080019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.366 [2024-07-25 10:31:08.080055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.366 [2024-07-25 10:31:08.094404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.366 [2024-07-25 10:31:08.094654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.366 [2024-07-25 10:31:08.094683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.366 [2024-07-25 10:31:08.109003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.366 [2024-07-25 10:31:08.109246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.366 [2024-07-25 10:31:08.109274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.366 [2024-07-25 10:31:08.123614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.366 [2024-07-25 10:31:08.123858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.366 [2024-07-25 10:31:08.123886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.366 [2024-07-25 10:31:08.138309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.366 [2024-07-25 10:31:08.138575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.366 [2024-07-25 10:31:08.138604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.153328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.153585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.153618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.167981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.168224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.168254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.182691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.182937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.182966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.197343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.197597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.197627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.211992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.212247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.212276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.226610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.226854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.226883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.241240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.241491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.241520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.255852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.256096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.256124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.270460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.270710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.270740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.285082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.285324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.285352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.299706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.299947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.299976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.314564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.314811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.314839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.329310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.329566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.329594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.344161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.344414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.344442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.358770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.359019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.359047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.373607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.373849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.373877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.625 [2024-07-25 10:31:08.388259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.625 [2024-07-25 10:31:08.388502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.625 [2024-07-25 10:31:08.388531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.403206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.403453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.403487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.418021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.418268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.418296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.432620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.432869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.432898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.447340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.447593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.447622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.461969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.462212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.462249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.476570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.476812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.476849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.491205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.491447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.491476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.505751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.505993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.506022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.520487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.520745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.520775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.535126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.535400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.535429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.549928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.550177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.550207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.564691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.564952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.564982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.579467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.579725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.579754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.594208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.594469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.594507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.609009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.609256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.609285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.623778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.624026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.624056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.638684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.638937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.638968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:18.885 [2024-07-25 10:31:08.653416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:18.885 [2024-07-25 10:31:08.653676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.885 [2024-07-25 10:31:08.653706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.144 [2024-07-25 10:31:08.668764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.144 [2024-07-25 10:31:08.669019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.144 [2024-07-25 10:31:08.669050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.144 [2024-07-25 10:31:08.683710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.144 [2024-07-25 10:31:08.683956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.144 [2024-07-25 10:31:08.683986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.144 [2024-07-25 10:31:08.698539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.144 [2024-07-25 10:31:08.698788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.144 [2024-07-25 10:31:08.698817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.144 [2024-07-25 10:31:08.713488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.144 [2024-07-25 10:31:08.713741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.713770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.728316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.728575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.728605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.743245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.743500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.743537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.758035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.758279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.758309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.772816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.773062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.773092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.787500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.787748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.787777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.802160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.802406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.802435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.816965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.817210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.817239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.831721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.831968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.831997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.846495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.846741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.846780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.861246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.861498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.861529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.875946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.876190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.876220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.890629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.890877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.890906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.905376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.905632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.905662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.145 [2024-07-25 10:31:08.920301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.145 [2024-07-25 10:31:08.920564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.145 [2024-07-25 10:31:08.920595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.404 [2024-07-25 10:31:08.935428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.404 [2024-07-25 10:31:08.935686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.404 [2024-07-25 10:31:08.935716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.404 [2024-07-25 10:31:08.950225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.404 [2024-07-25 10:31:08.950477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.404 [2024-07-25 10:31:08.950516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.404 [2024-07-25 10:31:08.965250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.404 [2024-07-25 10:31:08.965503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.404 [2024-07-25 10:31:08.965540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.404 [2024-07-25 10:31:08.979975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f4d0) with pdu=0x2000190df118
00:24:19.404 [2024-07-25 10:31:08.980238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.404 [2024-07-25 10:31:08.980268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:19.404
00:24:19.404 Latency(us)
00:24:19.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:19.404 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:19.404 nvme0n1 : 2.01 17496.98 68.35 0.00 0.00 7297.27 3519.53 15243.19
00:24:19.404 ===================================================================================================================
00:24:19.404 Total : 17496.98 68.35 0.00 0.00 7297.27 3519.53 15243.19
00:24:19.404 0
00:24:19.404 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:19.404 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:19.404 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:19.404 | .driver_specific
00:24:19.404 | .nvme_error
00:24:19.404 | .status_code
00:24:19.404 | .command_transient_transport_error'
00:24:19.404 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 ))
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1580587
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1580587 ']'
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1580587
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580587
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580587'
00:24:19.663 killing process with pid 1580587
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1580587
00:24:19.663 Received shutdown signal, test time was about 2.000000 seconds
00:24:19.663
00:24:19.663 Latency(us)
00:24:19.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:19.663 ===================================================================================================================
00:24:19.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:19.663 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1580587
00:24:19.921 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:19.921 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:19.921 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:19.921 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1580896
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1580896 /var/tmp/bperf.sock
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1580896 ']'
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:19.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:19.922 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:19.922 [2024-07-25 10:31:09.599372] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:24:19.922 [2024-07-25 10:31:09.599467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580896 ]
00:24:19.922 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:19.922 Zero copy mechanism will not be used.
00:24:19.922 EAL: No free 2048 kB hugepages reported on node 1
00:24:20.180 [2024-07-25 10:31:09.660638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:20.180 [2024-07-25 10:31:09.779899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:20.180 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:20.180 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:24:20.180 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:20.180 10:31:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:20.437 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:20.437 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:20.437 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:20.438 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:20.438 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:20.438 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:21.003 nvme0n1
00:24:21.003 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:21.003 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:21.003 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:21.003 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:21.003 10:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:21.003 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:21.003 Zero copy mechanism will not be used.
00:24:21.003 Running I/O for 2 seconds...
00:24:21.003 [2024-07-25 10:31:10.702545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.003 [2024-07-25 10:31:10.702956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.003 [2024-07-25 10:31:10.702998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:21.003 [2024-07-25 10:31:10.713577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.003 [2024-07-25 10:31:10.713962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.003 [2024-07-25 10:31:10.713998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:21.003 [2024-07-25 10:31:10.724517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.003 [2024-07-25 10:31:10.724905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.003 [2024-07-25 10:31:10.724940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:21.003 [2024-07-25 10:31:10.735256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.003 [2024-07-25 10:31:10.735646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.003 [2024-07-25 10:31:10.735680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:21.003 [2024-07-25 10:31:10.746041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.003 [2024-07-25 10:31:10.746442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.003 [2024-07-25 10:31:10.746477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:21.003 [2024-07-25 10:31:10.757001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.003 [2024-07-25 10:31:10.757116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.003 [2024-07-25 10:31:10.757152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:21.003 [2024-07-25 10:31:10.768050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.003 [2024-07-25 10:31:10.768447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.003 [2024-07-25 10:31:10.768489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:21.003 [2024-07-25 10:31:10.779093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.003 [2024-07-25 10:31:10.779506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.003 [2024-07-25 10:31:10.779551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.789906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.790296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.790331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.800611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.800992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.801027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.811642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.812027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.812061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.823007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.823395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.823429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.833929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.834324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.834358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.844895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.845286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.845320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.856118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.856516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.856558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.867350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.867869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.867912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.877679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.878043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.878084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.888337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.888690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.888724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.898626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.899032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.899066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.909548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
[2024-07-25 10:31:10.909877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.909911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.919441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.919794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.919828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.929445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.929849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.929884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.939746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.940134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.940168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.949980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.950337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.950371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.960054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.960378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.960412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.970352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:21.262 [2024-07-25 10:31:10.970762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.262 [2024-07-25 10:31:10.970797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:21.262 [2024-07-25 10:31:10.980176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.263 [2024-07-25 10:31:10.980558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.263 [2024-07-25 10:31:10.980592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.263 [2024-07-25 10:31:10.990168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.263 [2024-07-25 10:31:10.990560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.263 [2024-07-25 10:31:10.990595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.263 [2024-07-25 10:31:11.000225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.263 [2024-07-25 10:31:11.000605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.263 [2024-07-25 10:31:11.000640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.263 [2024-07-25 10:31:11.010143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.263 [2024-07-25 10:31:11.010549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.263 [2024-07-25 10:31:11.010583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.263 [2024-07-25 10:31:11.020684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.263 [2024-07-25 10:31:11.021011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.263 [2024-07-25 10:31:11.021045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.263 [2024-07-25 10:31:11.030669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.263 [2024-07-25 10:31:11.031102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.263 [2024-07-25 10:31:11.031136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.041176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.041599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.041635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.051169] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.051596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.051638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.061461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.061877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.061911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.071935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.072330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.072364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.082018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.082460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.082503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.092003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.092436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.092470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.101975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.102328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.102361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.112111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.112519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.112553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:21.522 [2024-07-25 10:31:11.122865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.123227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.123262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.132496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.132890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.132924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.142362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.142765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.142799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.152304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.152640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.152675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.163025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.163392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.163426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.173234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.173607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.173642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.182990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.183398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.522 [2024-07-25 10:31:11.183433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.522 [2024-07-25 10:31:11.193180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.522 [2024-07-25 10:31:11.193533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.193567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.203434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.203895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.203929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.213819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.214237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.214271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.224190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.224598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.224632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.234221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.234667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.234701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.244651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.245047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.245081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.254829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.255212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.255248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.264826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.265240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.265274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.275099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.275462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.275508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.285126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.285498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.285532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.523 [2024-07-25 10:31:11.295440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.523 [2024-07-25 10:31:11.295794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.523 [2024-07-25 10:31:11.295830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.305942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.306315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.306349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.315955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.316294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.316335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.325986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.326372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.326407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.336134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.336566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.336600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.346140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.346524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.346558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.355933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.356303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.356338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.366040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.366500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.366534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.376383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.376777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.376815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.386852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.387344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.387379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.397218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.397594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 
[2024-07-25 10:31:11.397628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.407620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.781 [2024-07-25 10:31:11.407981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.781 [2024-07-25 10:31:11.408015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.781 [2024-07-25 10:31:11.417845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.418210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.418244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.428261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.428617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.428653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.437885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.438250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.438286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.447828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.448222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.448256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.457983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.458373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.458407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.468375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.468761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.468796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.478628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.478961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.478996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.488945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.489283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.489317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.498958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.499294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.499328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.509179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.509612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.509647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.519333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.519708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.519742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.529172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.529544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.529579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.539361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.539819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.539854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.782 [2024-07-25 10:31:11.549697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:21.782 [2024-07-25 10:31:11.550082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.782 [2024-07-25 10:31:11.550117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.040 [2024-07-25 10:31:11.559655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.040 [2024-07-25 10:31:11.560063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.560099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.570098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.570495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.570531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.580363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.580778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.580820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.590863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.591209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.591243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.601007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.601469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.601524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.611542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.611946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.611980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.621702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.622125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.622159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.631811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.632210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.632245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.641732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.642108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.642143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.651712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.652068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.652102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.661850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.662248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.662283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.671914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.672309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.672344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.682197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 
[2024-07-25 10:31:11.682605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.682639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.692139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.692502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.692536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.702147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.702550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.702585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.712474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.712815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.712848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.722330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.722737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.722774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.732700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.733071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.733106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.742984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.743415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.743450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.753180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.753578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.753613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.763400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.763813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.763848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.773628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.774006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.774040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.783339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.783740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.783777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.793703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.794074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.794108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.803849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.804231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.804266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.041 [2024-07-25 10:31:11.814004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.041 [2024-07-25 10:31:11.814355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.041 [2024-07-25 10:31:11.814390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.300 [2024-07-25 10:31:11.824088] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.300 [2024-07-25 10:31:11.824472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.300 [2024-07-25 10:31:11.824518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.300 [2024-07-25 10:31:11.833969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.300 [2024-07-25 10:31:11.834339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.300 [2024-07-25 10:31:11.834373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.300 [2024-07-25 10:31:11.843856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.300 [2024-07-25 10:31:11.844294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.300 [2024-07-25 10:31:11.844337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.300 [2024-07-25 10:31:11.854077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.300 [2024-07-25 10:31:11.854461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.300 [2024-07-25 10:31:11.854505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.300 [2024-07-25 10:31:11.864302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.300 [2024-07-25 10:31:11.864691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.300 [2024-07-25 10:31:11.864725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.300 [2024-07-25 10:31:11.874115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.300 [2024-07-25 10:31:11.874517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.300 [2024-07-25 10:31:11.874551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.300 [2024-07-25 10:31:11.884415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90 00:24:22.300 [2024-07-25 10:31:11.884864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.300 [2024-07-25 10:31:11.884899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:22.300 [2024-07-25 10:31:11.894758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:22.300 [2024-07-25 10:31:11.895198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:22.300 [2024-07-25 10:31:11.895233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:22.300 [2024-07-25 10:31:11.904933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:22.300 [2024-07-25 10:31:11.905272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:22.300 [2024-07-25 10:31:11.905306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern -- data digest error on tqpair=(0x1b8f810), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats roughly every 10 ms for further WRITEs at other LBAs, from 10:31:11.915 through 10:31:12.656; the duplicated entries are elided here, and the final occurrences, the run summary, and the error-count check follow below ...]
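Every failure in the run above comes from the same injected-error path: each WRITE the bperf job issues completes with a data digest (CRC32C) error at data_crc32_calc_done, which the host driver then surfaces as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion. The host/digest.sh trace further down counts those completions over RPC. A minimal sketch of that counting step, reconstructed from the trace (the rpc.py path and the /var/tmp/bperf.sock socket are taken from this log; the helper mirrors what host/digest.sh does but is an illustration, not the verbatim script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat reports per-bdev NVMe error counters; pull out how
        # many completions carried TRANSIENT TRANSPORT ERROR (00/22).
        "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # The test passes when at least one such error was observed; in this run
    # the count came back as 195, so the (( 195 > 0 )) check below succeeds.

The final occurrences, the 2-second randwrite summary, and that check follow.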
00:24:23.079 [2024-07-25 10:31:12.666166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:23.079 [2024-07-25 10:31:12.666609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.079 [2024-07-25 10:31:12.666643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:23.079 [2024-07-25 10:31:12.676360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:23.079 [2024-07-25 10:31:12.676725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.079 [2024-07-25 10:31:12.676759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:23.079 [2024-07-25 10:31:12.686836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8f810) with pdu=0x2000190fef90
00:24:23.079 [2024-07-25 10:31:12.687217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.079 [2024-07-25 10:31:12.687251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:23.079
00:24:23.079                                            Latency(us)
00:24:23.079 Device Information : runtime(s)    IOPS   MiB/s  Fail/s    TO/s  Average      min      max
00:24:23.080 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:23.080 nvme0n1            :       2.01  3021.31  377.66    0.00    0.00  5281.99  4490.43  16311.18
00:24:23.080 ===================================================================================================================
00:24:23.080 Total              :             3021.31  377.66    0.00    0.00  5281.99  4490.43  16311.18
00:24:23.080 0
00:24:23.080 10:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:23.080 10:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:23.080 10:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:23.080 10:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:23.080 | .driver_specific
00:24:23.080 | .nvme_error
00:24:23.080 | .status_code
00:24:23.080 | .command_transient_transport_error'
00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 ))
00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1580896
00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1580896 ']'
00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1580896
00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:23.339 10:31:13
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580896 00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580896' 00:24:23.339 killing process with pid 1580896 00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1580896 00:24:23.339 Received shutdown signal, test time was about 2.000000 seconds 00:24:23.339 00:24:23.339 Latency(us) 00:24:23.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.339 =================================================================================================================== 00:24:23.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:23.339 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1580896 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1579844 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1579844 ']' 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1579844 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579844 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579844' 00:24:23.600 killing process with pid 1579844 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1579844 00:24:23.600 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1579844 00:24:23.868 00:24:23.868 real 0m15.637s 00:24:23.868 user 0m32.249s 00:24:23.868 sys 0m3.828s 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:23.868 ************************************ 00:24:23.868 END TEST nvmf_digest_error 00:24:23.868 ************************************ 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:23.868 10:31:13 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.868 rmmod nvme_tcp 00:24:23.868 rmmod nvme_fabrics 00:24:23.868 rmmod nvme_keyring 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1579844 ']' 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1579844 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1579844 ']' 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1579844 00:24:23.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1579844) - No such process 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1579844 is not found' 00:24:23.868 Process with pid 1579844 is not found 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.868 10:31:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:26.414 00:24:26.414 real 0m35.761s 00:24:26.414 user 1m5.465s 00:24:26.414 sys 0m9.206s 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.414 ************************************ 00:24:26.414 END TEST nvmf_digest 00:24:26.414 ************************************ 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:26.414 10:31:15 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.414 ************************************ 00:24:26.414 START TEST nvmf_bdevperf 00:24:26.414 ************************************ 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:26.414 * Looking for test storage... 00:24:26.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.414 10:31:15 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:26.414 10:31:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.793 10:31:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:27.793 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:27.793 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:27.793 10:31:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:27.793 Found net devices under 0000:08:00.0: cvl_0_0 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.793 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:27.794 Found net devices under 0000:08:00.1: cvl_0_1 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.794 10:31:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:27.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:24:27.794 00:24:27.794 --- 10.0.0.2 ping statistics --- 00:24:27.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.794 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:24:27.794 00:24:27.794 --- 10.0.0.1 ping statistics --- 00:24:27.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.794 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1582792 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1582792 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1582792 ']' 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:27.794 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.053 [2024-07-25 10:31:17.578400] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
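
The nvmf_tcp_init sequence above builds the test topology from the two E810 ports: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target side, cvl_0_1 stays in the root namespace as the initiator side, an iptables rule admits NVMe/TCP traffic, and the two pings verify reachability in both directions before the target (nvmf_tgt, starting below) is launched inside the namespace. The same commands, collected from the trace into one block:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator
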
00:24:28.053 [2024-07-25 10:31:17.578509] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.053 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.053 [2024-07-25 10:31:17.645293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:28.053 [2024-07-25 10:31:17.765818] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.053 [2024-07-25 10:31:17.765886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.053 [2024-07-25 10:31:17.765902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.053 [2024-07-25 10:31:17.765916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.053 [2024-07-25 10:31:17.765927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.053 [2024-07-25 10:31:17.766012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.053 [2024-07-25 10:31:17.766335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.053 [2024-07-25 10:31:17.766338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.312 [2024-07-25 10:31:17.903420] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.312 Malloc0 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.312 10:31:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.312 [2024-07-25 10:31:17.962939] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.312 { 00:24:28.312 "params": { 00:24:28.312 "name": "Nvme$subsystem", 00:24:28.312 "trtype": "$TEST_TRANSPORT", 00:24:28.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.312 "adrfam": "ipv4", 00:24:28.312 "trsvcid": "$NVMF_PORT", 00:24:28.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.312 "hdgst": ${hdgst:-false}, 00:24:28.312 "ddgst": ${ddgst:-false} 00:24:28.312 }, 00:24:28.312 "method": "bdev_nvme_attach_controller" 00:24:28.312 } 00:24:28.312 EOF 00:24:28.312 )") 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:28.312 10:31:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:28.312 "params": { 00:24:28.312 "name": "Nvme1", 00:24:28.312 "trtype": "tcp", 00:24:28.312 "traddr": "10.0.0.2", 00:24:28.313 "adrfam": "ipv4", 00:24:28.313 "trsvcid": "4420", 00:24:28.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.313 "hdgst": false, 00:24:28.313 "ddgst": false 00:24:28.313 }, 00:24:28.313 "method": "bdev_nvme_attach_controller" 00:24:28.313 }' 00:24:28.313 [2024-07-25 10:31:18.015258] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
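
At this point tgt_init has provisioned the target end to end: a TCP transport (with the -o and -u 8192 options from NVMF_TRANSPORT_OPTS), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host), the bdev attached as a namespace, and a listener on 10.0.0.2:4420. The rpc_cmd wrapper used above issues standard SPDK RPCs; the equivalent calls through scripts/rpc.py against the default /var/tmp/spdk.sock would look like this sketch:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB backing bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
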
00:24:28.313 [2024-07-25 10:31:18.015349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582831 ] 00:24:28.313 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.313 [2024-07-25 10:31:18.076899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.571 [2024-07-25 10:31:18.196366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.829 Running I/O for 1 seconds... 00:24:30.203 00:24:30.203 Latency(us) 00:24:30.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.204 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:30.204 Verification LBA range: start 0x0 length 0x4000 00:24:30.204 Nvme1n1 : 1.05 7074.70 27.64 0.00 0.00 17311.20 3665.16 45438.29 00:24:30.204 =================================================================================================================== 00:24:30.204 Total : 7074.70 27.64 0.00 0.00 17311.20 3665.16 45438.29 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1583024 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:30.204 { 00:24:30.204 "params": { 00:24:30.204 "name": "Nvme$subsystem", 00:24:30.204 "trtype": "$TEST_TRANSPORT", 00:24:30.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.204 "adrfam": "ipv4", 00:24:30.204 "trsvcid": "$NVMF_PORT", 00:24:30.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.204 "hdgst": ${hdgst:-false}, 00:24:30.204 "ddgst": ${ddgst:-false} 00:24:30.204 }, 00:24:30.204 "method": "bdev_nvme_attach_controller" 00:24:30.204 } 00:24:30.204 EOF 00:24:30.204 )") 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:30.204 10:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:30.204 "params": { 00:24:30.204 "name": "Nvme1", 00:24:30.204 "trtype": "tcp", 00:24:30.204 "traddr": "10.0.0.2", 00:24:30.204 "adrfam": "ipv4", 00:24:30.204 "trsvcid": "4420", 00:24:30.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:30.204 "hdgst": false, 00:24:30.204 "ddgst": false 00:24:30.204 }, 00:24:30.204 "method": "bdev_nvme_attach_controller" 00:24:30.204 }' 00:24:30.204 [2024-07-25 10:31:19.845424] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
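
Two bdevperf passes are traced here. The first (-q 128 -o 4096 -w verify -t 1) ran to completion at roughly 7075 IOPS / 27.6 MiB/s over NVMe/TCP. The second is started with -t 15 -f and left running while the script hard-kills the target; its config (fed through --json /dev/fd/63 via process substitution) is the bdev_nvme_attach_controller blob printed above. A condensed sketch of the steps host/bdevperf.sh traces next, with $bdevperfpid and $nvmfpid as stand-ins for the literal pids 1583024 and 1582792 seen in this run:

  build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w verify -t 15 -f &                # 15 s verify run, queue depth 128, 4 KiB I/O
  bdevperfpid=$!
  sleep 3
  kill -9 "$nvmfpid"                                     # SIGKILL nvmf_tgt mid-run; 128 I/Os are in flight
  sleep 3                                                # the abort flood below is the immediate result
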
00:24:30.204 [2024-07-25 10:31:19.845529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583024 ]
00:24:30.204 EAL: No free 2048 kB hugepages reported on node 1
00:24:30.204 [2024-07-25 10:31:19.907265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:30.462 [2024-07-25 10:31:20.026845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:30.462 Running I/O for 15 seconds...
00:24:33.749 10:31:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1582792
00:24:33.749 10:31:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:24:33.749 [2024-07-25 10:31:22.809804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.749 [2024-07-25 10:31:22.809852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:33.749 [2024-07-25 10:31:22.809884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.749 [2024-07-25 10:31:22.809902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 125 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: interleaved WRITEs (lba 15288-15960) and READs (lba 14944-15256), every one completing ABORTED - SQ DELETION (00/08) ...]
00:24:33.752 [2024-07-25 10:31:22.814108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff5bc0 is same with the state(5) to be set
00:24:33.752 [2024-07-25 10:31:22.814125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:33.752 [2024-07-25 10:31:22.814138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:33.752 [2024-07-25 10:31:22.814152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:8 PRP1 0x0 PRP2 0x0
00:24:33.752 [2024-07-25 10:31:22.814170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:33.752 [2024-07-25 10:31:22.814225]
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ff5bc0 was disconnected and freed. reset controller. 00:24:33.752 [2024-07-25 10:31:22.818271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.752 [2024-07-25 10:31:22.818342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.752 [2024-07-25 10:31:22.819169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.752 [2024-07-25 10:31:22.819220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.752 [2024-07-25 10:31:22.819238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.752 [2024-07-25 10:31:22.819513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.752 [2024-07-25 10:31:22.819783] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.752 [2024-07-25 10:31:22.819805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.752 [2024-07-25 10:31:22.819822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.752 [2024-07-25 10:31:22.823880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.752 [2024-07-25 10:31:22.832947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.833465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.833562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.833581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.833852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.834118] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.834141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.834157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.838198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
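
The block above is SPDK draining I/O qpair 1 during a controller reset: each command still outstanding (READ/WRITE with its sqid, cid, nsid, lba and len fields) is printed and then completed manually with the status ABORTED - SQ DELETION. The (00/08) pair in those completion lines reads as NVMe Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion). A minimal, self-contained sketch of decoding that pair follows; it is illustrative C, not SPDK's own decoder, and the table only carries the entries this log exercises:

    /* Illustrative decoder for the "(SCT/SC)" pair printed in completion
     * lines such as "ABORTED - SQ DELETION (00/08)".  Assumes the NVMe
     * generic status code table; entries not seen in this log are elided. */
    #include <stdio.h>
    #include <stdint.h>

    static const char *nvme_status_str(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x0) {            /* Generic Command Status */
            switch (sc) {
            case 0x00: return "SUCCESS";
            case 0x08: return "ABORTED - SQ DELETION";
            default:   return "GENERIC (unlisted)";
            }
        }
        return "NON-GENERIC (unlisted)";
    }

    int main(void)
    {
        uint8_t sct = 0x00, sc = 0x08;   /* the pair printed as (00/08) above */
        printf("(%02x/%02x) -> %s\n", sct, sc, nvme_status_str(sct, sc));
        return 0;
    }
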
00:24:33.753 [2024-07-25 10:31:22.847485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.847952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.848004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.848022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.848286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.848561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.848593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.848608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.852635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.753 [2024-07-25 10:31:22.861921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.862367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.862410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.862430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.862712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.862981] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.863005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.863020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.867060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
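
Every reconnect attempt in these cycles fails in posix_sock_create with "connect() failed, errno = 111", which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 because the test has taken the target down, so the TCP handshake is rejected outright. A standalone sketch of the same failure mode, assuming only POSIX sockets (the address and port mirror the log; substitute 127.0.0.1 and any closed port to reproduce it locally):

    /* Reproduce the failure logged by posix_sock_create: a TCP connect()
     * to a port with no listener fails with errno 111 (ECONNREFUSED). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port   = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With a reachable host but no NVMe/TCP listener this prints:
             * "connect() failed, errno = 111 (Connection refused)" */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }
        close(fd);
        return 0;
    }
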
00:24:33.753 [2024-07-25 10:31:22.876340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.876900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.876946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.876965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.877237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.877516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.877541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.877557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.881616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.753 [2024-07-25 10:31:22.890745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.891309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.891351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.891370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.891655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.891931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.891955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.891970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.896027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
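
The attempts settle into a fixed cycle roughly every 14 ms: disconnect, connect() refused, the flush on the dead socket fails, nvme_ctrlr_process_init finds the controller in error state, reconnect_poll_async gives up, and bdev_nvme records "Resetting controller failed." before scheduling the next try. A hedged sketch of that retry shape follows; try_connect, RETRY_DELAY_MS and MAX_ATTEMPTS are invented for illustration, and SPDK drives the real cycle from its poller-based controller state machine rather than a blocking loop:

    /* Illustrative bounded reconnect loop mirroring the cadence above. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    #define RETRY_DELAY_MS 14   /* roughly the spacing between attempts in the log */
    #define MAX_ATTEMPTS   8

    /* Stand-in for the transport connect; a real caller would use connect(). */
    static bool try_connect(void)
    {
        errno = ECONNREFUSED;   /* what posix_sock_create reports as errno 111 */
        return false;
    }

    int main(void)
    {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            if (try_connect()) {
                printf("attempt %d: reconnected\n", attempt);
                return 0;
            }
            printf("attempt %d: connect() failed, errno = %d; reset failed\n",
                   attempt, errno);
            struct timespec ts = { .tv_sec = 0,
                                   .tv_nsec = RETRY_DELAY_MS * 1000000L };
            nanosleep(&ts, NULL);   /* back off before the next reset attempt */
        }
        fprintf(stderr, "giving up after %d attempts\n", MAX_ATTEMPTS);
        return 1;
    }
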
00:24:33.753 [2024-07-25 10:31:22.905103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.905697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.905740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.905759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.906029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.906299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.906322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.906339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.910397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.753 [2024-07-25 10:31:22.919470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.920120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.920164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.920184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.920458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.920741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.920766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.920782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.924853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.753 [2024-07-25 10:31:22.933991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.934598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.934666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.934692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.934963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.935232] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.935256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.935271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.939365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.753 [2024-07-25 10:31:22.948515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.949107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.949151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.949170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.949447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.949729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.949754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.949769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.953808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.753 [2024-07-25 10:31:22.962918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.753 [2024-07-25 10:31:22.963362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.753 [2024-07-25 10:31:22.963404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.753 [2024-07-25 10:31:22.963424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.753 [2024-07-25 10:31:22.963715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.753 [2024-07-25 10:31:22.963993] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.753 [2024-07-25 10:31:22.964016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.753 [2024-07-25 10:31:22.964032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.753 [2024-07-25 10:31:22.968067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.753 [2024-07-25 10:31:22.977364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.754 [2024-07-25 10:31:22.977993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.754 [2024-07-25 10:31:22.978036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.754 [2024-07-25 10:31:22.978056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.754 [2024-07-25 10:31:22.978326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.754 [2024-07-25 10:31:22.978609] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.754 [2024-07-25 10:31:22.978634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.754 [2024-07-25 10:31:22.978650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.754 [2024-07-25 10:31:22.982701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.754 [2024-07-25 10:31:22.991755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.754 [2024-07-25 10:31:22.992356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.754 [2024-07-25 10:31:22.992398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.754 [2024-07-25 10:31:22.992425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.754 [2024-07-25 10:31:22.992708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.754 [2024-07-25 10:31:22.992978] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.754 [2024-07-25 10:31:22.993001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.754 [2024-07-25 10:31:22.993016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.754 [2024-07-25 10:31:22.997050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.754 [2024-07-25 10:31:23.006107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.754 [2024-07-25 10:31:23.006665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.754 [2024-07-25 10:31:23.006708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.754 [2024-07-25 10:31:23.006728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.754 [2024-07-25 10:31:23.007005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.754 [2024-07-25 10:31:23.007274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.754 [2024-07-25 10:31:23.007297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.754 [2024-07-25 10:31:23.007313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.754 [2024-07-25 10:31:23.011372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
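
The "(9): Bad file descriptor" in each flush failure is errno EBADF: by the time nvme_tcp_qpair_process_completions tries to flush tqpair 0x1dc38d0, the disconnect path has already closed its socket. A minimal demonstration of the same errno, using an arbitrary duplicated descriptor in place of the qpair's socket:

    /* Write to an already-closed descriptor to observe errno 9 (EBADF). */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = dup(1);          /* any valid descriptor stands in for the socket */
        close(fd);                /* torn down, as the disconnect path does */
        if (write(fd, "x", 1) < 0)
            printf("(%d): %s\n", errno, strerror(errno));  /* (9): Bad file descriptor */
        return 0;
    }
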
00:24:33.754 [2024-07-25 10:31:23.020441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.754 [2024-07-25 10:31:23.021020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.754 [2024-07-25 10:31:23.021062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.754 [2024-07-25 10:31:23.021082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.754 [2024-07-25 10:31:23.021353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.754 [2024-07-25 10:31:23.021635] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.754 [2024-07-25 10:31:23.021659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.754 [2024-07-25 10:31:23.021675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.754 [2024-07-25 10:31:23.025711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.754 [2024-07-25 10:31:23.035009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.754 [2024-07-25 10:31:23.035567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.754 [2024-07-25 10:31:23.035619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.754 [2024-07-25 10:31:23.035639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.754 [2024-07-25 10:31:23.035909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.754 [2024-07-25 10:31:23.036178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.754 [2024-07-25 10:31:23.036208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.754 [2024-07-25 10:31:23.036224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.754 [2024-07-25 10:31:23.040281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.754 [2024-07-25 10:31:23.049358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.754 [2024-07-25 10:31:23.049886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.754 [2024-07-25 10:31:23.049935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.754 [2024-07-25 10:31:23.049955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.754 [2024-07-25 10:31:23.050226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.754 [2024-07-25 10:31:23.050506] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.754 [2024-07-25 10:31:23.050531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.754 [2024-07-25 10:31:23.050552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.754 [2024-07-25 10:31:23.054591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.754 [2024-07-25 10:31:23.063898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.754 [2024-07-25 10:31:23.064393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.754 [2024-07-25 10:31:23.064442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.754 [2024-07-25 10:31:23.064461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.754 [2024-07-25 10:31:23.064742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.754 [2024-07-25 10:31:23.065017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.754 [2024-07-25 10:31:23.065041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.754 [2024-07-25 10:31:23.065056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.754 [2024-07-25 10:31:23.069212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.754 [2024-07-25 10:31:23.078445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.754 [2024-07-25 10:31:23.079051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.754 [2024-07-25 10:31:23.079098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.754 [2024-07-25 10:31:23.079120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.754 [2024-07-25 10:31:23.079397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.755 [2024-07-25 10:31:23.079679] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.755 [2024-07-25 10:31:23.079704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.755 [2024-07-25 10:31:23.079720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.755 [2024-07-25 10:31:23.083799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.755 [2024-07-25 10:31:23.092876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.755 [2024-07-25 10:31:23.093467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.755 [2024-07-25 10:31:23.093519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.755 [2024-07-25 10:31:23.093539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.755 [2024-07-25 10:31:23.093816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.755 [2024-07-25 10:31:23.094086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.755 [2024-07-25 10:31:23.094109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.755 [2024-07-25 10:31:23.094125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.755 [2024-07-25 10:31:23.098163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.755 [2024-07-25 10:31:23.107289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.755 [2024-07-25 10:31:23.107876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.755 [2024-07-25 10:31:23.107918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.755 [2024-07-25 10:31:23.107937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.755 [2024-07-25 10:31:23.108208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.755 [2024-07-25 10:31:23.108476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.755 [2024-07-25 10:31:23.108515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.755 [2024-07-25 10:31:23.108532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.755 [2024-07-25 10:31:23.112629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.755 [2024-07-25 10:31:23.121983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.755 [2024-07-25 10:31:23.122621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.755 [2024-07-25 10:31:23.122664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.755 [2024-07-25 10:31:23.122684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.755 [2024-07-25 10:31:23.122954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.755 [2024-07-25 10:31:23.123223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.755 [2024-07-25 10:31:23.123249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.755 [2024-07-25 10:31:23.123265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.755 [2024-07-25 10:31:23.127352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.755 [2024-07-25 10:31:23.136528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.755 [2024-07-25 10:31:23.137119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.755 [2024-07-25 10:31:23.137162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.755 [2024-07-25 10:31:23.137182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.755 [2024-07-25 10:31:23.137458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.755 [2024-07-25 10:31:23.137741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.755 [2024-07-25 10:31:23.137766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.755 [2024-07-25 10:31:23.137782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.755 [2024-07-25 10:31:23.141871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.755 [2024-07-25 10:31:23.151050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.755 [2024-07-25 10:31:23.151660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.755 [2024-07-25 10:31:23.151705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.755 [2024-07-25 10:31:23.151724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.755 [2024-07-25 10:31:23.152001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.755 [2024-07-25 10:31:23.152269] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.755 [2024-07-25 10:31:23.152293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.755 [2024-07-25 10:31:23.152309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.755 [2024-07-25 10:31:23.156389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.755 [2024-07-25 10:31:23.165584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.755 [2024-07-25 10:31:23.166166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.755 [2024-07-25 10:31:23.166209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.755 [2024-07-25 10:31:23.166230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.755 [2024-07-25 10:31:23.166514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.755 [2024-07-25 10:31:23.166791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.755 [2024-07-25 10:31:23.166814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.755 [2024-07-25 10:31:23.166830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.755 [2024-07-25 10:31:23.170916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.755 [2024-07-25 10:31:23.180107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.755 [2024-07-25 10:31:23.180632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.755 [2024-07-25 10:31:23.180685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.755 [2024-07-25 10:31:23.180703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.755 [2024-07-25 10:31:23.180968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.755 [2024-07-25 10:31:23.181236] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.755 [2024-07-25 10:31:23.181260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.755 [2024-07-25 10:31:23.181282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.755 [2024-07-25 10:31:23.185372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.755 [2024-07-25 10:31:23.194604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.755 [2024-07-25 10:31:23.195112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.755 [2024-07-25 10:31:23.195163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.755 [2024-07-25 10:31:23.195180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.755 [2024-07-25 10:31:23.195450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.755 [2024-07-25 10:31:23.195730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.755 [2024-07-25 10:31:23.195757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.755 [2024-07-25 10:31:23.195772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.755 [2024-07-25 10:31:23.199867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.755 [2024-07-25 10:31:23.208986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.755 [2024-07-25 10:31:23.209499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.755 [2024-07-25 10:31:23.209529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.755 [2024-07-25 10:31:23.209546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.755 [2024-07-25 10:31:23.209810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.210086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.210112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.210128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.214225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.756 [2024-07-25 10:31:23.223338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.756 [2024-07-25 10:31:23.223946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.756 [2024-07-25 10:31:23.223989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.756 [2024-07-25 10:31:23.224009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.756 [2024-07-25 10:31:23.224279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.224563] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.224588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.224604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.228689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.756 [2024-07-25 10:31:23.237853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.756 [2024-07-25 10:31:23.238259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.756 [2024-07-25 10:31:23.238293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.756 [2024-07-25 10:31:23.238310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.756 [2024-07-25 10:31:23.238590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.238858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.238881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.238896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.242962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.756 [2024-07-25 10:31:23.252319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.756 [2024-07-25 10:31:23.252898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.756 [2024-07-25 10:31:23.252941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.756 [2024-07-25 10:31:23.252961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.756 [2024-07-25 10:31:23.253231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.253515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.253540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.253555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.257621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.756 [2024-07-25 10:31:23.266806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.756 [2024-07-25 10:31:23.267408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.756 [2024-07-25 10:31:23.267451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.756 [2024-07-25 10:31:23.267470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.756 [2024-07-25 10:31:23.267754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.268022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.268046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.268062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.272166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.756 [2024-07-25 10:31:23.281356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.756 [2024-07-25 10:31:23.281959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.756 [2024-07-25 10:31:23.282002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.756 [2024-07-25 10:31:23.282021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.756 [2024-07-25 10:31:23.282292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.282582] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.282607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.282622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.286697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.756 [2024-07-25 10:31:23.295860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.756 [2024-07-25 10:31:23.296440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.756 [2024-07-25 10:31:23.296492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.756 [2024-07-25 10:31:23.296513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.756 [2024-07-25 10:31:23.296784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.297052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.297080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.297095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.301140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.756 [2024-07-25 10:31:23.310219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.756 [2024-07-25 10:31:23.310843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.756 [2024-07-25 10:31:23.310886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.756 [2024-07-25 10:31:23.310905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.756 [2024-07-25 10:31:23.311176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.311445] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.311468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.311498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.315551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.756 [2024-07-25 10:31:23.324773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.756 [2024-07-25 10:31:23.325379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.756 [2024-07-25 10:31:23.325423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.756 [2024-07-25 10:31:23.325442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.756 [2024-07-25 10:31:23.325725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.326000] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.326024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.326040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.330170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.756 [2024-07-25 10:31:23.339383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.756 [2024-07-25 10:31:23.339946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.756 [2024-07-25 10:31:23.339989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.756 [2024-07-25 10:31:23.340009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.756 [2024-07-25 10:31:23.340279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.756 [2024-07-25 10:31:23.340562] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.756 [2024-07-25 10:31:23.340587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.756 [2024-07-25 10:31:23.340603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.756 [2024-07-25 10:31:23.344686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.757 [2024-07-25 10:31:23.353818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.354391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.354434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.354454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.757 [2024-07-25 10:31:23.354740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.757 [2024-07-25 10:31:23.355010] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.757 [2024-07-25 10:31:23.355035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.757 [2024-07-25 10:31:23.355051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.757 [2024-07-25 10:31:23.359141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.757 [2024-07-25 10:31:23.368358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.368910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.368961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.368980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.757 [2024-07-25 10:31:23.369245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.757 [2024-07-25 10:31:23.369525] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.757 [2024-07-25 10:31:23.369549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.757 [2024-07-25 10:31:23.369565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.757 [2024-07-25 10:31:23.373644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.757 [2024-07-25 10:31:23.382776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.383302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.383352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.383375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.757 [2024-07-25 10:31:23.383652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.757 [2024-07-25 10:31:23.383920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.757 [2024-07-25 10:31:23.383945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.757 [2024-07-25 10:31:23.383961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.757 [2024-07-25 10:31:23.388050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.757 [2024-07-25 10:31:23.397232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.397748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.397812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.397847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.757 [2024-07-25 10:31:23.398111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.757 [2024-07-25 10:31:23.398380] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.757 [2024-07-25 10:31:23.398405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.757 [2024-07-25 10:31:23.398420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.757 [2024-07-25 10:31:23.402529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.757 [2024-07-25 10:31:23.411691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.412241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.412283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.412302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.757 [2024-07-25 10:31:23.412594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.757 [2024-07-25 10:31:23.412869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.757 [2024-07-25 10:31:23.412895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.757 [2024-07-25 10:31:23.412911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.757 [2024-07-25 10:31:23.416988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.757 [2024-07-25 10:31:23.426100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.426638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.426681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.426700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.757 [2024-07-25 10:31:23.426971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.757 [2024-07-25 10:31:23.427240] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.757 [2024-07-25 10:31:23.427269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.757 [2024-07-25 10:31:23.427285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.757 [2024-07-25 10:31:23.431382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.757 [2024-07-25 10:31:23.440567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.441157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.441213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.441233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.757 [2024-07-25 10:31:23.441518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.757 [2024-07-25 10:31:23.441787] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.757 [2024-07-25 10:31:23.441812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.757 [2024-07-25 10:31:23.441828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.757 [2024-07-25 10:31:23.445903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.757 [2024-07-25 10:31:23.455094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.455699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.455742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.455761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.757 [2024-07-25 10:31:23.456032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.757 [2024-07-25 10:31:23.456300] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.757 [2024-07-25 10:31:23.456329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.757 [2024-07-25 10:31:23.456344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.757 [2024-07-25 10:31:23.460441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.757 [2024-07-25 10:31:23.469661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.470214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.470256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.470276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.757 [2024-07-25 10:31:23.470562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.757 [2024-07-25 10:31:23.470830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.757 [2024-07-25 10:31:23.470854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.757 [2024-07-25 10:31:23.470869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.757 [2024-07-25 10:31:23.474921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.757 [2024-07-25 10:31:23.484055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.757 [2024-07-25 10:31:23.484680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.757 [2024-07-25 10:31:23.484725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.757 [2024-07-25 10:31:23.484744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.758 [2024-07-25 10:31:23.485015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.758 [2024-07-25 10:31:23.485284] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.758 [2024-07-25 10:31:23.485308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.758 [2024-07-25 10:31:23.485324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.758 [2024-07-25 10:31:23.489401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.758 [2024-07-25 10:31:23.498626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.758 [2024-07-25 10:31:23.499216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.758 [2024-07-25 10:31:23.499259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.758 [2024-07-25 10:31:23.499278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.758 [2024-07-25 10:31:23.499570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.758 [2024-07-25 10:31:23.499840] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.758 [2024-07-25 10:31:23.499863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.758 [2024-07-25 10:31:23.499878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.758 [2024-07-25 10:31:23.503955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.758 [2024-07-25 10:31:23.513082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.758 [2024-07-25 10:31:23.513700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.758 [2024-07-25 10:31:23.513743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:33.758 [2024-07-25 10:31:23.513763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:33.758 [2024-07-25 10:31:23.514033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:33.758 [2024-07-25 10:31:23.514301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.758 [2024-07-25 10:31:23.514325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.758 [2024-07-25 10:31:23.514341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.758 [2024-07-25 10:31:23.518468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.017 [2024-07-25 10:31:23.527726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.017 [2024-07-25 10:31:23.528297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.017 [2024-07-25 10:31:23.528340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.017 [2024-07-25 10:31:23.528364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.017 [2024-07-25 10:31:23.528652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.017 [2024-07-25 10:31:23.528921] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.017 [2024-07-25 10:31:23.528945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.017 [2024-07-25 10:31:23.528961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.017 [2024-07-25 10:31:23.533025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.017 [2024-07-25 10:31:23.542247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.017 [2024-07-25 10:31:23.542820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.017 [2024-07-25 10:31:23.542863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.017 [2024-07-25 10:31:23.542883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.543154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.543421] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.543445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.543461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.547589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.018 [2024-07-25 10:31:23.556818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.018 [2024-07-25 10:31:23.557327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.018 [2024-07-25 10:31:23.557368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.018 [2024-07-25 10:31:23.557387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.557671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.557940] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.557965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.557981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.562047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.018 [2024-07-25 10:31:23.571335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.018 [2024-07-25 10:31:23.571817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.018 [2024-07-25 10:31:23.571860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.018 [2024-07-25 10:31:23.571880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.572150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.572425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.572460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.572493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.576615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.018 [2024-07-25 10:31:23.585718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.018 [2024-07-25 10:31:23.586330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.018 [2024-07-25 10:31:23.586372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.018 [2024-07-25 10:31:23.586391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.586673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.586943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.586966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.586982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.591046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.018 [2024-07-25 10:31:23.600166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.018 [2024-07-25 10:31:23.600770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.018 [2024-07-25 10:31:23.600827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.018 [2024-07-25 10:31:23.600846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.601117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.601385] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.601410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.601425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.605504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.018 [2024-07-25 10:31:23.614642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.018 [2024-07-25 10:31:23.615172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.018 [2024-07-25 10:31:23.615213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.018 [2024-07-25 10:31:23.615232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.615523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.615793] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.615816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.615832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.619906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.018 [2024-07-25 10:31:23.629038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.018 [2024-07-25 10:31:23.629567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.018 [2024-07-25 10:31:23.629660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.018 [2024-07-25 10:31:23.629680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.629951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.630221] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.630244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.630260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.634319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.018 [2024-07-25 10:31:23.643422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.018 [2024-07-25 10:31:23.643885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.018 [2024-07-25 10:31:23.643937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.018 [2024-07-25 10:31:23.643954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.644226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.644512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.644537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.644552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.648593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.018 [2024-07-25 10:31:23.657945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.018 [2024-07-25 10:31:23.658530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.018 [2024-07-25 10:31:23.658587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.018 [2024-07-25 10:31:23.658607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.658877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.659146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.659170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.659186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.663243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.018 [2024-07-25 10:31:23.672395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.018 [2024-07-25 10:31:23.672968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.018 [2024-07-25 10:31:23.673011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.018 [2024-07-25 10:31:23.673031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.018 [2024-07-25 10:31:23.673308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.018 [2024-07-25 10:31:23.673592] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.018 [2024-07-25 10:31:23.673617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.018 [2024-07-25 10:31:23.673632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.018 [2024-07-25 10:31:23.677694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.019 [2024-07-25 10:31:23.686762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.019 [2024-07-25 10:31:23.687363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.019 [2024-07-25 10:31:23.687406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.019 [2024-07-25 10:31:23.687425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.019 [2024-07-25 10:31:23.687709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.019 [2024-07-25 10:31:23.687979] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.019 [2024-07-25 10:31:23.688003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.019 [2024-07-25 10:31:23.688018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.019 [2024-07-25 10:31:23.692072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.019 [2024-07-25 10:31:23.701161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.019 [2024-07-25 10:31:23.701774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.019 [2024-07-25 10:31:23.701818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.019 [2024-07-25 10:31:23.701837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.019 [2024-07-25 10:31:23.702107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.019 [2024-07-25 10:31:23.702377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.019 [2024-07-25 10:31:23.702401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.019 [2024-07-25 10:31:23.702416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.019 [2024-07-25 10:31:23.706473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.019 [2024-07-25 10:31:23.715609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.019 [2024-07-25 10:31:23.716197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.019 [2024-07-25 10:31:23.716240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.019 [2024-07-25 10:31:23.716259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.019 [2024-07-25 10:31:23.716542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.019 [2024-07-25 10:31:23.716812] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.019 [2024-07-25 10:31:23.716836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.019 [2024-07-25 10:31:23.716859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.019 [2024-07-25 10:31:23.720947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.019 [2024-07-25 10:31:23.730011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.019 [2024-07-25 10:31:23.730577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.019 [2024-07-25 10:31:23.730620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.019 [2024-07-25 10:31:23.730640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.019 [2024-07-25 10:31:23.730911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.019 [2024-07-25 10:31:23.731181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.019 [2024-07-25 10:31:23.731205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.019 [2024-07-25 10:31:23.731221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.019 [2024-07-25 10:31:23.735278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.019 [2024-07-25 10:31:23.744365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.019 [2024-07-25 10:31:23.744974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.019 [2024-07-25 10:31:23.745016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.019 [2024-07-25 10:31:23.745038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.019 [2024-07-25 10:31:23.745310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.019 [2024-07-25 10:31:23.745595] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.019 [2024-07-25 10:31:23.745620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.019 [2024-07-25 10:31:23.745635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.019 [2024-07-25 10:31:23.749685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.019 [2024-07-25 10:31:23.758809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.019 [2024-07-25 10:31:23.759267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.019 [2024-07-25 10:31:23.759315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.019 [2024-07-25 10:31:23.759333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.019 [2024-07-25 10:31:23.759609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.019 [2024-07-25 10:31:23.759878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.019 [2024-07-25 10:31:23.759902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.019 [2024-07-25 10:31:23.759917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.019 [2024-07-25 10:31:23.763961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.019 [2024-07-25 10:31:23.773355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.019 [2024-07-25 10:31:23.773907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.019 [2024-07-25 10:31:23.773968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.019 [2024-07-25 10:31:23.773986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.019 [2024-07-25 10:31:23.774250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.019 [2024-07-25 10:31:23.774528] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.019 [2024-07-25 10:31:23.774552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.019 [2024-07-25 10:31:23.774567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.019 [2024-07-25 10:31:23.778617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.019 [2024-07-25 10:31:23.787725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.019 [2024-07-25 10:31:23.788208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.019 [2024-07-25 10:31:23.788258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.019 [2024-07-25 10:31:23.788274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.019 [2024-07-25 10:31:23.788548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.019 [2024-07-25 10:31:23.788817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.019 [2024-07-25 10:31:23.788840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.019 [2024-07-25 10:31:23.788855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.019 [2024-07-25 10:31:23.792966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.279 [2024-07-25 10:31:23.802159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.279 [2024-07-25 10:31:23.802715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.279 [2024-07-25 10:31:23.802770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.279 [2024-07-25 10:31:23.802787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.279 [2024-07-25 10:31:23.803051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.279 [2024-07-25 10:31:23.803319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.279 [2024-07-25 10:31:23.803342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.279 [2024-07-25 10:31:23.803357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.279 [2024-07-25 10:31:23.807409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.279 [2024-07-25 10:31:23.816764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.279 [2024-07-25 10:31:23.817228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.279 [2024-07-25 10:31:23.817278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.279 [2024-07-25 10:31:23.817296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.279 [2024-07-25 10:31:23.817577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.279 [2024-07-25 10:31:23.817852] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.279 [2024-07-25 10:31:23.817876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.279 [2024-07-25 10:31:23.817891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.279 [2024-07-25 10:31:23.821935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.279 [2024-07-25 10:31:23.831343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.279 [2024-07-25 10:31:23.831788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.279 [2024-07-25 10:31:23.831844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.279 [2024-07-25 10:31:23.831862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.279 [2024-07-25 10:31:23.832125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.279 [2024-07-25 10:31:23.832394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.279 [2024-07-25 10:31:23.832417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.279 [2024-07-25 10:31:23.832433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.279 [2024-07-25 10:31:23.836467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.279 [2024-07-25 10:31:23.845745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.279 [2024-07-25 10:31:23.846275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.279 [2024-07-25 10:31:23.846327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.279 [2024-07-25 10:31:23.846344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.279 [2024-07-25 10:31:23.846615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.279 [2024-07-25 10:31:23.846883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.279 [2024-07-25 10:31:23.846907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.279 [2024-07-25 10:31:23.846922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.279 [2024-07-25 10:31:23.850985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.279 [2024-07-25 10:31:23.860338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.279 [2024-07-25 10:31:23.860914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.860970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.860992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.861264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.861546] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.861570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.861585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.865639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.280 [2024-07-25 10:31:23.874759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:23.875270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.875311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.875330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.875622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.875892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.875916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.875931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.880001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.280 [2024-07-25 10:31:23.889101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:23.889604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.889697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.889717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.889993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.890263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.890288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.890304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.894394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.280 [2024-07-25 10:31:23.903513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:23.904081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.904122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.904141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.904417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.904698] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.904722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.904738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.908831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.280 [2024-07-25 10:31:23.917967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:23.918464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.918502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.918527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.918793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.919061] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.919084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.919099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.923134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.280 [2024-07-25 10:31:23.932417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:23.932902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.932944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.932963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.933234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.933514] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.933538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.933553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.937585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.280 [2024-07-25 10:31:23.946883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:23.947307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.947338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.947356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.947629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.947898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.947921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.947937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.951975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.280 [2024-07-25 10:31:23.961269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:23.961733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.961776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.961795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.962066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.962336] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.962367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.962383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.966441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.280 [2024-07-25 10:31:23.975794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:23.976311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.976343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.976361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.976637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.976906] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.976929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.976944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.980991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.280 [2024-07-25 10:31:23.990304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:23.990859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.280 [2024-07-25 10:31:23.990902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.280 [2024-07-25 10:31:23.990922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.280 [2024-07-25 10:31:23.991192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.280 [2024-07-25 10:31:23.991462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.280 [2024-07-25 10:31:23.991496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.280 [2024-07-25 10:31:23.991514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.280 [2024-07-25 10:31:23.995563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.280 [2024-07-25 10:31:24.004680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.280 [2024-07-25 10:31:24.005267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.281 [2024-07-25 10:31:24.005310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.281 [2024-07-25 10:31:24.005329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.281 [2024-07-25 10:31:24.005613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.281 [2024-07-25 10:31:24.005885] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.281 [2024-07-25 10:31:24.005911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.281 [2024-07-25 10:31:24.005926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.281 [2024-07-25 10:31:24.010005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.281 [2024-07-25 10:31:24.019113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.281 [2024-07-25 10:31:24.019649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.281 [2024-07-25 10:31:24.019707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:34.281 [2024-07-25 10:31:24.019727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:34.281 [2024-07-25 10:31:24.019998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:34.281 [2024-07-25 10:31:24.020267] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.281 [2024-07-25 10:31:24.020291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.281 [2024-07-25 10:31:24.020306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.281 [2024-07-25 10:31:24.024337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.281 [2024-07-25 10:31:24.033622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.281 [2024-07-25 10:31:24.034074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.281 [2024-07-25 10:31:24.034106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.281 [2024-07-25 10:31:24.034123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.281 [2024-07-25 10:31:24.034388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.281 [2024-07-25 10:31:24.034664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.281 [2024-07-25 10:31:24.034690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.281 [2024-07-25 10:31:24.034705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.281 [2024-07-25 10:31:24.038740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.281 [2024-07-25 10:31:24.048023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.281 [2024-07-25 10:31:24.048434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.281 [2024-07-25 10:31:24.048465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.281 [2024-07-25 10:31:24.048489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.281 [2024-07-25 10:31:24.048756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.281 [2024-07-25 10:31:24.049023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.281 [2024-07-25 10:31:24.049047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.281 [2024-07-25 10:31:24.049062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.281 [2024-07-25 10:31:24.053169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.543 [2024-07-25 10:31:24.062582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.543 [2024-07-25 10:31:24.063074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.543 [2024-07-25 10:31:24.063107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.543 [2024-07-25 10:31:24.063125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.543 [2024-07-25 10:31:24.063397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.543 [2024-07-25 10:31:24.063675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.543 [2024-07-25 10:31:24.063700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.543 [2024-07-25 10:31:24.063715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.543 [2024-07-25 10:31:24.067748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.543 [2024-07-25 10:31:24.077028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.543 [2024-07-25 10:31:24.077454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.543 [2024-07-25 10:31:24.077493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.543 [2024-07-25 10:31:24.077513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.543 [2024-07-25 10:31:24.077777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.543 [2024-07-25 10:31:24.078057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.543 [2024-07-25 10:31:24.078081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.543 [2024-07-25 10:31:24.078101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.543 [2024-07-25 10:31:24.082208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.543 [2024-07-25 10:31:24.091525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.543 [2024-07-25 10:31:24.092034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.543 [2024-07-25 10:31:24.092076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.543 [2024-07-25 10:31:24.092095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.543 [2024-07-25 10:31:24.092376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.543 [2024-07-25 10:31:24.092665] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.543 [2024-07-25 10:31:24.092690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.543 [2024-07-25 10:31:24.092706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.543 [2024-07-25 10:31:24.096738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.543 [2024-07-25 10:31:24.106027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.543 [2024-07-25 10:31:24.106475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.543 [2024-07-25 10:31:24.106527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.543 [2024-07-25 10:31:24.106546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.543 [2024-07-25 10:31:24.106817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.543 [2024-07-25 10:31:24.107086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.543 [2024-07-25 10:31:24.107109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.543 [2024-07-25 10:31:24.107132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.543 [2024-07-25 10:31:24.111166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.543 [2024-07-25 10:31:24.120532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.543 [2024-07-25 10:31:24.121122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.543 [2024-07-25 10:31:24.121181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.543 [2024-07-25 10:31:24.121200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.543 [2024-07-25 10:31:24.121471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.543 [2024-07-25 10:31:24.121753] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.543 [2024-07-25 10:31:24.121777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.543 [2024-07-25 10:31:24.121792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.543 [2024-07-25 10:31:24.125823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.543 [2024-07-25 10:31:24.134870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.543 [2024-07-25 10:31:24.135350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.543 [2024-07-25 10:31:24.135398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.543 [2024-07-25 10:31:24.135416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.543 [2024-07-25 10:31:24.135690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.543 [2024-07-25 10:31:24.135959] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.543 [2024-07-25 10:31:24.135982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.543 [2024-07-25 10:31:24.135997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.543 [2024-07-25 10:31:24.140035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.543 [2024-07-25 10:31:24.149264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.543 [2024-07-25 10:31:24.149849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.543 [2024-07-25 10:31:24.149892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.543 [2024-07-25 10:31:24.149911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.543 [2024-07-25 10:31:24.150182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.543 [2024-07-25 10:31:24.150452] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.543 [2024-07-25 10:31:24.150491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.543 [2024-07-25 10:31:24.150510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.543 [2024-07-25 10:31:24.154543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.543 [2024-07-25 10:31:24.163598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.543 [2024-07-25 10:31:24.164147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.543 [2024-07-25 10:31:24.164190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.543 [2024-07-25 10:31:24.164210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.543 [2024-07-25 10:31:24.164491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.543 [2024-07-25 10:31:24.164761] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.543 [2024-07-25 10:31:24.164784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.164799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.168844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.178140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.178600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.178641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.178661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.178932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.179201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.544 [2024-07-25 10:31:24.179224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.179239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.183282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.192608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.193053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.193110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.193127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.193392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.193670] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.544 [2024-07-25 10:31:24.193694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.193710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.197793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.207143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.207697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.207753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.207773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.208050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.208319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.544 [2024-07-25 10:31:24.208342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.208358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.212398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.221545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.222115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.222168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.222188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.222458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.222752] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.544 [2024-07-25 10:31:24.222776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.222792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.226841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.235936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.236410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.236452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.236472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.236757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.237028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.544 [2024-07-25 10:31:24.237052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.237068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.241116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.250415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.251014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.251057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.251076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.251353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.251636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.544 [2024-07-25 10:31:24.251661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.251683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.255797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.264978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.265452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.265504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.265525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.265811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.266080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.544 [2024-07-25 10:31:24.266104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.266120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.270188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.279550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.280025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.280071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.280089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.280353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.280633] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.544 [2024-07-25 10:31:24.280657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.280673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.284753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.293943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.294393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.294443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.294461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.294742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.295009] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.544 [2024-07-25 10:31:24.295034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.544 [2024-07-25 10:31:24.295049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.544 [2024-07-25 10:31:24.299099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.544 [2024-07-25 10:31:24.308464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.544 [2024-07-25 10:31:24.308988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.544 [2024-07-25 10:31:24.309076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.544 [2024-07-25 10:31:24.309096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.544 [2024-07-25 10:31:24.309367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.544 [2024-07-25 10:31:24.309656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.545 [2024-07-25 10:31:24.309682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.545 [2024-07-25 10:31:24.309697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.545 [2024-07-25 10:31:24.313805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.806 [2024-07-25 10:31:24.323062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.806 [2024-07-25 10:31:24.323613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.806 [2024-07-25 10:31:24.323656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.806 [2024-07-25 10:31:24.323676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.806 [2024-07-25 10:31:24.323970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.806 [2024-07-25 10:31:24.324250] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.806 [2024-07-25 10:31:24.324275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.806 [2024-07-25 10:31:24.324291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.806 [2024-07-25 10:31:24.328396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.806 [2024-07-25 10:31:24.337603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.806 [2024-07-25 10:31:24.338100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.806 [2024-07-25 10:31:24.338143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.806 [2024-07-25 10:31:24.338162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.806 [2024-07-25 10:31:24.338432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.806 [2024-07-25 10:31:24.338713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.806 [2024-07-25 10:31:24.338738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.806 [2024-07-25 10:31:24.338754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.806 [2024-07-25 10:31:24.342827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.806 [2024-07-25 10:31:24.351965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.806 [2024-07-25 10:31:24.352460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.806 [2024-07-25 10:31:24.352516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.806 [2024-07-25 10:31:24.352535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.806 [2024-07-25 10:31:24.352800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.806 [2024-07-25 10:31:24.353074] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.806 [2024-07-25 10:31:24.353098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.806 [2024-07-25 10:31:24.353113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.806 [2024-07-25 10:31:24.357166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.806 [2024-07-25 10:31:24.366516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.806 [2024-07-25 10:31:24.367100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.806 [2024-07-25 10:31:24.367143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.806 [2024-07-25 10:31:24.367162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.806 [2024-07-25 10:31:24.367433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.806 [2024-07-25 10:31:24.367716] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.806 [2024-07-25 10:31:24.367741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.806 [2024-07-25 10:31:24.367756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.806 [2024-07-25 10:31:24.371875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.806 [2024-07-25 10:31:24.381034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.806 [2024-07-25 10:31:24.381633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.806 [2024-07-25 10:31:24.381676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.807 [2024-07-25 10:31:24.381696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.807 [2024-07-25 10:31:24.381967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.807 [2024-07-25 10:31:24.382235] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.807 [2024-07-25 10:31:24.382258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.807 [2024-07-25 10:31:24.382274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.807 [2024-07-25 10:31:24.386387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.807 [2024-07-25 10:31:24.395586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.807 [2024-07-25 10:31:24.396181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.807 [2024-07-25 10:31:24.396224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.807 [2024-07-25 10:31:24.396243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.807 [2024-07-25 10:31:24.396528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.807 [2024-07-25 10:31:24.396798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.807 [2024-07-25 10:31:24.396822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.807 [2024-07-25 10:31:24.396837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.807 [2024-07-25 10:31:24.400906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.807 [2024-07-25 10:31:24.410010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.807 [2024-07-25 10:31:24.410452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.807 [2024-07-25 10:31:24.410547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.807 [2024-07-25 10:31:24.410565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.807 [2024-07-25 10:31:24.410832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.807 [2024-07-25 10:31:24.411099] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.807 [2024-07-25 10:31:24.411122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.807 [2024-07-25 10:31:24.411138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.807 [2024-07-25 10:31:24.415232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.807 [2024-07-25 10:31:24.424408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.807 [2024-07-25 10:31:24.424929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.807 [2024-07-25 10:31:24.424970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.807 [2024-07-25 10:31:24.424990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.807 [2024-07-25 10:31:24.425260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.807 [2024-07-25 10:31:24.425551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.807 [2024-07-25 10:31:24.425586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.807 [2024-07-25 10:31:24.425603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.807 [2024-07-25 10:31:24.429672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.807 [2024-07-25 10:31:24.438823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.807 [2024-07-25 10:31:24.439410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.807 [2024-07-25 10:31:24.439453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.807 [2024-07-25 10:31:24.439473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.807 [2024-07-25 10:31:24.439757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.807 [2024-07-25 10:31:24.440026] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.807 [2024-07-25 10:31:24.440051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.807 [2024-07-25 10:31:24.440067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.807 [2024-07-25 10:31:24.444119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.807 [2024-07-25 10:31:24.453303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.807 [2024-07-25 10:31:24.453909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.807 [2024-07-25 10:31:24.453952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.807 [2024-07-25 10:31:24.453978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.807 [2024-07-25 10:31:24.454256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.807 [2024-07-25 10:31:24.454539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.807 [2024-07-25 10:31:24.454564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.807 [2024-07-25 10:31:24.454580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.807 [2024-07-25 10:31:24.458655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.807 [2024-07-25 10:31:24.467803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.807 [2024-07-25 10:31:24.468400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.807 [2024-07-25 10:31:24.468443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.807 [2024-07-25 10:31:24.468463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.807 [2024-07-25 10:31:24.468750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.807 [2024-07-25 10:31:24.469019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.807 [2024-07-25 10:31:24.469044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.807 [2024-07-25 10:31:24.469060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.807 [2024-07-25 10:31:24.473109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.807 [2024-07-25 10:31:24.482276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.807 [2024-07-25 10:31:24.482871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.807 [2024-07-25 10:31:24.482913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.807 [2024-07-25 10:31:24.482933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.807 [2024-07-25 10:31:24.483203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.807 [2024-07-25 10:31:24.483470] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.807 [2024-07-25 10:31:24.483511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.807 [2024-07-25 10:31:24.483528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.807 [2024-07-25 10:31:24.487626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.807 [2024-07-25 10:31:24.496771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.807 [2024-07-25 10:31:24.497284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.807 [2024-07-25 10:31:24.497316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.807 [2024-07-25 10:31:24.497333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.807 [2024-07-25 10:31:24.497610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.807 [2024-07-25 10:31:24.497879] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.807 [2024-07-25 10:31:24.497909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.807 [2024-07-25 10:31:24.497925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.807 [2024-07-25 10:31:24.501983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.807 [2024-07-25 10:31:24.511343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.807 [2024-07-25 10:31:24.511871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.807 [2024-07-25 10:31:24.511916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.808 [2024-07-25 10:31:24.511934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.808 [2024-07-25 10:31:24.512198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.808 [2024-07-25 10:31:24.512464] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.808 [2024-07-25 10:31:24.512503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.808 [2024-07-25 10:31:24.512519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.808 [2024-07-25 10:31:24.516624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.808 [2024-07-25 10:31:24.525776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.808 [2024-07-25 10:31:24.526216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.808 [2024-07-25 10:31:24.526260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.808 [2024-07-25 10:31:24.526277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.808 [2024-07-25 10:31:24.526554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.808 [2024-07-25 10:31:24.526821] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.808 [2024-07-25 10:31:24.526844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.808 [2024-07-25 10:31:24.526860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.808 [2024-07-25 10:31:24.530920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.808 [2024-07-25 10:31:24.540275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.808 [2024-07-25 10:31:24.540792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.808 [2024-07-25 10:31:24.540841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.808 [2024-07-25 10:31:24.540859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.808 [2024-07-25 10:31:24.541122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.808 [2024-07-25 10:31:24.541388] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.808 [2024-07-25 10:31:24.541413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.808 [2024-07-25 10:31:24.541428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.808 [2024-07-25 10:31:24.545533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.808 [2024-07-25 10:31:24.554711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.808 [2024-07-25 10:31:24.555179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.808 [2024-07-25 10:31:24.555210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.808 [2024-07-25 10:31:24.555227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.808 [2024-07-25 10:31:24.555503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.808 [2024-07-25 10:31:24.555770] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.808 [2024-07-25 10:31:24.555793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.808 [2024-07-25 10:31:24.555808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.808 [2024-07-25 10:31:24.559861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.808 [2024-07-25 10:31:24.569235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.808 [2024-07-25 10:31:24.569718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.808 [2024-07-25 10:31:24.569749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:34.808 [2024-07-25 10:31:24.569767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:34.808 [2024-07-25 10:31:24.570031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:34.808 [2024-07-25 10:31:24.570297] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.808 [2024-07-25 10:31:24.570320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.808 [2024-07-25 10:31:24.570336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.808 [2024-07-25 10:31:24.574412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.070 [2024-07-25 10:31:24.583685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.070 [2024-07-25 10:31:24.584253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.070 [2024-07-25 10:31:24.584295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:35.070 [2024-07-25 10:31:24.584315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:35.070 [2024-07-25 10:31:24.584599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:35.070 [2024-07-25 10:31:24.584878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.070 [2024-07-25 10:31:24.584903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.070 [2024-07-25 10:31:24.584919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.070 [2024-07-25 10:31:24.589074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.070 [2024-07-25 10:31:24.598253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.070 [2024-07-25 10:31:24.598805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.070 [2024-07-25 10:31:24.598857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:35.070 [2024-07-25 10:31:24.598874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:35.070 [2024-07-25 10:31:24.599144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:35.070 [2024-07-25 10:31:24.599412] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.070 [2024-07-25 10:31:24.599435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.070 [2024-07-25 10:31:24.599451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.070 [2024-07-25 10:31:24.603561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.070 [2024-07-25 10:31:24.612762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.070 [2024-07-25 10:31:24.613311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.070 [2024-07-25 10:31:24.613363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:35.070 [2024-07-25 10:31:24.613381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:35.070 [2024-07-25 10:31:24.613663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:35.070 [2024-07-25 10:31:24.613932] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.070 [2024-07-25 10:31:24.613957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.070 [2024-07-25 10:31:24.613972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.070 [2024-07-25 10:31:24.618053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.070 [2024-07-25 10:31:24.627259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.070 [2024-07-25 10:31:24.627768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.070 [2024-07-25 10:31:24.627814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:35.070 [2024-07-25 10:31:24.627831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:35.070 [2024-07-25 10:31:24.628101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:35.070 [2024-07-25 10:31:24.628367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.070 [2024-07-25 10:31:24.628391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.070 [2024-07-25 10:31:24.628406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.070 [2024-07-25 10:31:24.632466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.070 [2024-07-25 10:31:24.641680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.070 [2024-07-25 10:31:24.642259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.070 [2024-07-25 10:31:24.642302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:35.070 [2024-07-25 10:31:24.642322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:35.070 [2024-07-25 10:31:24.642607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:35.070 [2024-07-25 10:31:24.642877] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.070 [2024-07-25 10:31:24.642900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.070 [2024-07-25 10:31:24.642922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.070 [2024-07-25 10:31:24.646997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.070 [2024-07-25 10:31:24.656144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.070 [2024-07-25 10:31:24.656553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.070 [2024-07-25 10:31:24.656586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420
00:24:35.070 [2024-07-25 10:31:24.656604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set
00:24:35.070 [2024-07-25 10:31:24.656870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor
00:24:35.070 [2024-07-25 10:31:24.657137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.070 [2024-07-25 10:31:24.657161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.070 [2024-07-25 10:31:24.657177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.070 [2024-07-25 10:31:24.661262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.070 [2024-07-25 10:31:24.670643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.070 [2024-07-25 10:31:24.671185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.070 [2024-07-25 10:31:24.671228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.070 [2024-07-25 10:31:24.671247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.070 [2024-07-25 10:31:24.671533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.070 [2024-07-25 10:31:24.671802] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.070 [2024-07-25 10:31:24.671825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.070 [2024-07-25 10:31:24.671841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.070 [2024-07-25 10:31:24.675944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.070 [2024-07-25 10:31:24.685084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.070 [2024-07-25 10:31:24.685606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.070 [2024-07-25 10:31:24.685663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.070 [2024-07-25 10:31:24.685683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.070 [2024-07-25 10:31:24.685955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.686223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.686247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.686262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.690345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.071 [2024-07-25 10:31:24.699504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.700084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.071 [2024-07-25 10:31:24.700126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.071 [2024-07-25 10:31:24.700145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.071 [2024-07-25 10:31:24.700416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.700702] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.700728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.700744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.704814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.071 [2024-07-25 10:31:24.713967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.714497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.071 [2024-07-25 10:31:24.714590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.071 [2024-07-25 10:31:24.714610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.071 [2024-07-25 10:31:24.714880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.715148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.715172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.715188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.719303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.071 [2024-07-25 10:31:24.728423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.728941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.071 [2024-07-25 10:31:24.728987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.071 [2024-07-25 10:31:24.729006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.071 [2024-07-25 10:31:24.729276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.729560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.729585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.729600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.733691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.071 [2024-07-25 10:31:24.742883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.743449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.071 [2024-07-25 10:31:24.743501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.071 [2024-07-25 10:31:24.743522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.071 [2024-07-25 10:31:24.743799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.744068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.744092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.744108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.748192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.071 [2024-07-25 10:31:24.757330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.757934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.071 [2024-07-25 10:31:24.757978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.071 [2024-07-25 10:31:24.757997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.071 [2024-07-25 10:31:24.758268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.758552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.758577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.758592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.762659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.071 [2024-07-25 10:31:24.771861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.772395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.071 [2024-07-25 10:31:24.772437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.071 [2024-07-25 10:31:24.772456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.071 [2024-07-25 10:31:24.772738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.773007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.773031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.773046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.777120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.071 [2024-07-25 10:31:24.786287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.786856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.071 [2024-07-25 10:31:24.786920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.071 [2024-07-25 10:31:24.786938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.071 [2024-07-25 10:31:24.787209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.787476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.787513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.787535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.791605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.071 [2024-07-25 10:31:24.800795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.801316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.071 [2024-07-25 10:31:24.801376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.071 [2024-07-25 10:31:24.801417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.071 [2024-07-25 10:31:24.801693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.801960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.801983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.801999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.806077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.071 [2024-07-25 10:31:24.815229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.815774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.071 [2024-07-25 10:31:24.815826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.071 [2024-07-25 10:31:24.815844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.071 [2024-07-25 10:31:24.816107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.071 [2024-07-25 10:31:24.816384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.071 [2024-07-25 10:31:24.816410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.071 [2024-07-25 10:31:24.816425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.071 [2024-07-25 10:31:24.820517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.071 [2024-07-25 10:31:24.829655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.071 [2024-07-25 10:31:24.830141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.072 [2024-07-25 10:31:24.830192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.072 [2024-07-25 10:31:24.830209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.072 [2024-07-25 10:31:24.830473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.072 [2024-07-25 10:31:24.830751] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.072 [2024-07-25 10:31:24.830775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.072 [2024-07-25 10:31:24.830791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.072 [2024-07-25 10:31:24.834849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.072 [2024-07-25 10:31:24.844341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.072 [2024-07-25 10:31:24.844881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.072 [2024-07-25 10:31:24.844937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.072 [2024-07-25 10:31:24.844956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.072 [2024-07-25 10:31:24.845237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.072 [2024-07-25 10:31:24.845534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.072 [2024-07-25 10:31:24.845559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.072 [2024-07-25 10:31:24.845575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.333 [2024-07-25 10:31:24.849670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.333 [2024-07-25 10:31:24.858841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.333 [2024-07-25 10:31:24.859342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.333 [2024-07-25 10:31:24.859393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.333 [2024-07-25 10:31:24.859411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.333 [2024-07-25 10:31:24.859686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.333 [2024-07-25 10:31:24.859954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.333 [2024-07-25 10:31:24.859977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.333 [2024-07-25 10:31:24.859992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.333 [2024-07-25 10:31:24.864089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.333 [2024-07-25 10:31:24.873280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.333 [2024-07-25 10:31:24.873782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.333 [2024-07-25 10:31:24.873834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.333 [2024-07-25 10:31:24.873851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.333 [2024-07-25 10:31:24.874115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.333 [2024-07-25 10:31:24.874381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.333 [2024-07-25 10:31:24.874404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:24.874419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:24.878476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.334 [2024-07-25 10:31:24.887887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:24.888410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:24.888441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:24.888458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:24.888733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:24.889007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:24.889031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:24.889046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:24.893121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.334 [2024-07-25 10:31:24.902285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:24.902836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:24.902887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:24.902904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:24.903167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:24.903433] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:24.903456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:24.903471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:24.907579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.334 [2024-07-25 10:31:24.916723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:24.917182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:24.917212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:24.917228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:24.917501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:24.917769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:24.917792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:24.917807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:24.921871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.334 [2024-07-25 10:31:24.931206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:24.932095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:24.932127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:24.932145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:24.932410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:24.932691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:24.932715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:24.932731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:24.936775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.334 [2024-07-25 10:31:24.945748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:24.946325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:24.946377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:24.946394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:24.946669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:24.946937] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:24.946960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:24.946976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:24.951056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.334 [2024-07-25 10:31:24.960263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:24.960712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:24.960760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:24.960778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:24.961041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:24.961309] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:24.961334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:24.961350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:24.965468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.334 [2024-07-25 10:31:24.974776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:24.975356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:24.975412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:24.975432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:24.975722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:24.975991] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:24.976015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:24.976030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:24.980097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.334 [2024-07-25 10:31:24.989239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:24.989841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:24.989885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:24.989911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:24.990182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:24.990450] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:24.990473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:24.990504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:24.994603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.334 [2024-07-25 10:31:25.003794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:25.004342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:25.004391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:25.004408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:25.004685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:25.004953] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:25.004977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.334 [2024-07-25 10:31:25.004992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.334 [2024-07-25 10:31:25.009062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.334 [2024-07-25 10:31:25.018195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.334 [2024-07-25 10:31:25.018702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.334 [2024-07-25 10:31:25.018733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.334 [2024-07-25 10:31:25.018751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.334 [2024-07-25 10:31:25.019015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.334 [2024-07-25 10:31:25.019282] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.334 [2024-07-25 10:31:25.019306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.335 [2024-07-25 10:31:25.019321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.335 [2024-07-25 10:31:25.023426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.335 [2024-07-25 10:31:25.032595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.335 [2024-07-25 10:31:25.033114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.335 [2024-07-25 10:31:25.033165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.335 [2024-07-25 10:31:25.033183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.335 [2024-07-25 10:31:25.033446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.335 [2024-07-25 10:31:25.033724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.335 [2024-07-25 10:31:25.033754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.335 [2024-07-25 10:31:25.033770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.335 [2024-07-25 10:31:25.037854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.335 [2024-07-25 10:31:25.047026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.335 [2024-07-25 10:31:25.047663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.335 [2024-07-25 10:31:25.047706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.335 [2024-07-25 10:31:25.047725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.335 [2024-07-25 10:31:25.047995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.335 [2024-07-25 10:31:25.048263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.335 [2024-07-25 10:31:25.048287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.335 [2024-07-25 10:31:25.048302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.335 [2024-07-25 10:31:25.052370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.335 [2024-07-25 10:31:25.061633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.335 [2024-07-25 10:31:25.062245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.335 [2024-07-25 10:31:25.062288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.335 [2024-07-25 10:31:25.062308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.335 [2024-07-25 10:31:25.062599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.335 [2024-07-25 10:31:25.062868] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.335 [2024-07-25 10:31:25.062892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.335 [2024-07-25 10:31:25.062907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.335 [2024-07-25 10:31:25.066993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.335 [2024-07-25 10:31:25.076137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.335 [2024-07-25 10:31:25.076737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.335 [2024-07-25 10:31:25.076780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.335 [2024-07-25 10:31:25.076799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.335 [2024-07-25 10:31:25.077070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.335 [2024-07-25 10:31:25.077337] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.335 [2024-07-25 10:31:25.077362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.335 [2024-07-25 10:31:25.077378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.335 [2024-07-25 10:31:25.081443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.335 [2024-07-25 10:31:25.090652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.335 [2024-07-25 10:31:25.091189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.335 [2024-07-25 10:31:25.091233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.335 [2024-07-25 10:31:25.091252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.335 [2024-07-25 10:31:25.091536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.335 [2024-07-25 10:31:25.091812] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.335 [2024-07-25 10:31:25.091836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.335 [2024-07-25 10:31:25.091852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.335 [2024-07-25 10:31:25.095972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.335 [2024-07-25 10:31:25.105179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.335 [2024-07-25 10:31:25.105735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.335 [2024-07-25 10:31:25.105786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.335 [2024-07-25 10:31:25.105803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.335 [2024-07-25 10:31:25.106077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.335 [2024-07-25 10:31:25.106367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.335 [2024-07-25 10:31:25.106392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.335 [2024-07-25 10:31:25.106407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.110519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.596 [2024-07-25 10:31:25.119757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.120265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.120333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.120366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.120640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.120907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.120931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.120947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.125033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.596 [2024-07-25 10:31:25.134167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.134669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.134717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.134734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.135004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.135273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.135296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.135312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.139358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.596 [2024-07-25 10:31:25.148707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.149274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.149317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.149336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.149619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.149889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.149912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.149928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.153967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.596 [2024-07-25 10:31:25.163033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.163591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.163633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.163653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.163924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.164193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.164216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.164232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.168281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.596 [2024-07-25 10:31:25.177439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.177926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.177968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.177987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.178258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.178539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.178564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.178593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.182635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.596 [2024-07-25 10:31:25.191928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.192460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.192526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.192547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.192817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.193086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.193110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.193126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.197160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.596 [2024-07-25 10:31:25.206477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.207057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.207116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.207136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.207406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.207698] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.207723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.207739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.211779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.596 [2024-07-25 10:31:25.220837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.221322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.221352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.221370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.221642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.221910] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.221933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.221949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.225981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.596 [2024-07-25 10:31:25.235269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.235729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.235771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.235790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.236061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.236330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.236354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.236370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.596 [2024-07-25 10:31:25.240412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.596 [2024-07-25 10:31:25.249707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.596 [2024-07-25 10:31:25.250133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.596 [2024-07-25 10:31:25.250165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.596 [2024-07-25 10:31:25.250182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.596 [2024-07-25 10:31:25.250453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.596 [2024-07-25 10:31:25.250730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.596 [2024-07-25 10:31:25.250754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.596 [2024-07-25 10:31:25.250769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.597 [2024-07-25 10:31:25.254813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.597 [2024-07-25 10:31:25.264133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.597 [2024-07-25 10:31:25.264562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.597 [2024-07-25 10:31:25.264614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.597 [2024-07-25 10:31:25.264637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.597 [2024-07-25 10:31:25.264915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.597 [2024-07-25 10:31:25.265192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.597 [2024-07-25 10:31:25.265215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.597 [2024-07-25 10:31:25.265231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.597 [2024-07-25 10:31:25.269269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.597 [2024-07-25 10:31:25.278565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.597 [2024-07-25 10:31:25.279023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.597 [2024-07-25 10:31:25.279053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.597 [2024-07-25 10:31:25.279069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.597 [2024-07-25 10:31:25.279339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.597 [2024-07-25 10:31:25.279622] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.597 [2024-07-25 10:31:25.279646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.597 [2024-07-25 10:31:25.279661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.597 [2024-07-25 10:31:25.283716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.597 [2024-07-25 10:31:25.293048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.597 [2024-07-25 10:31:25.293575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.597 [2024-07-25 10:31:25.293618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.597 [2024-07-25 10:31:25.293638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.597 [2024-07-25 10:31:25.293909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.597 [2024-07-25 10:31:25.294177] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.597 [2024-07-25 10:31:25.294200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.597 [2024-07-25 10:31:25.294215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.597 [2024-07-25 10:31:25.298257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.597 [2024-07-25 10:31:25.307584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.597 [2024-07-25 10:31:25.308147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.597 [2024-07-25 10:31:25.308204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.597 [2024-07-25 10:31:25.308223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.597 [2024-07-25 10:31:25.308504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.597 [2024-07-25 10:31:25.308773] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.597 [2024-07-25 10:31:25.308797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.597 [2024-07-25 10:31:25.308812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.597 [2024-07-25 10:31:25.312883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.597 [2024-07-25 10:31:25.321981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.597 [2024-07-25 10:31:25.322504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.597 [2024-07-25 10:31:25.322536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.597 [2024-07-25 10:31:25.322554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.597 [2024-07-25 10:31:25.322818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.597 [2024-07-25 10:31:25.323085] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.597 [2024-07-25 10:31:25.323108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.597 [2024-07-25 10:31:25.323124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.597 [2024-07-25 10:31:25.327180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.597 [2024-07-25 10:31:25.336522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.597 [2024-07-25 10:31:25.337033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.597 [2024-07-25 10:31:25.337083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.597 [2024-07-25 10:31:25.337100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.597 [2024-07-25 10:31:25.337363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.597 [2024-07-25 10:31:25.337644] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.597 [2024-07-25 10:31:25.337668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.597 [2024-07-25 10:31:25.337684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.597 [2024-07-25 10:31:25.341749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.597 [2024-07-25 10:31:25.350925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.597 [2024-07-25 10:31:25.351387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.597 [2024-07-25 10:31:25.351432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.597 [2024-07-25 10:31:25.351450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.597 [2024-07-25 10:31:25.351725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.597 [2024-07-25 10:31:25.351994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.597 [2024-07-25 10:31:25.352016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.597 [2024-07-25 10:31:25.352031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.597 [2024-07-25 10:31:25.356094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.597 [2024-07-25 10:31:25.365459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.597 [2024-07-25 10:31:25.366037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.597 [2024-07-25 10:31:25.366088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.597 [2024-07-25 10:31:25.366105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.597 [2024-07-25 10:31:25.366369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.597 [2024-07-25 10:31:25.366649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.597 [2024-07-25 10:31:25.366674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.597 [2024-07-25 10:31:25.366689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.597 [2024-07-25 10:31:25.370800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.857 [2024-07-25 10:31:25.380001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.857 [2024-07-25 10:31:25.380533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.857 [2024-07-25 10:31:25.380564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.857 [2024-07-25 10:31:25.380587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.857 [2024-07-25 10:31:25.380853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.857 [2024-07-25 10:31:25.381121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.857 [2024-07-25 10:31:25.381145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.857 [2024-07-25 10:31:25.381160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.857 [2024-07-25 10:31:25.385238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.857 [2024-07-25 10:31:25.394360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.857 [2024-07-25 10:31:25.394914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.857 [2024-07-25 10:31:25.394957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.857 [2024-07-25 10:31:25.394976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.857 [2024-07-25 10:31:25.395247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.857 [2024-07-25 10:31:25.395529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.857 [2024-07-25 10:31:25.395553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.857 [2024-07-25 10:31:25.395569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.857 [2024-07-25 10:31:25.399606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.857 [2024-07-25 10:31:25.408735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.857 [2024-07-25 10:31:25.409309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.857 [2024-07-25 10:31:25.409366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.857 [2024-07-25 10:31:25.409386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.857 [2024-07-25 10:31:25.409670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.857 [2024-07-25 10:31:25.409939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.857 [2024-07-25 10:31:25.409963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.857 [2024-07-25 10:31:25.409978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.857 [2024-07-25 10:31:25.414033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.857 [2024-07-25 10:31:25.423163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.857 [2024-07-25 10:31:25.423758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.857 [2024-07-25 10:31:25.423801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.857 [2024-07-25 10:31:25.423820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.857 [2024-07-25 10:31:25.424091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.857 [2024-07-25 10:31:25.424367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.424390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.424405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.428454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.858 [2024-07-25 10:31:25.437560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.438114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.438157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.438176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.438447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.438727] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.438752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.438768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.442844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.858 [2024-07-25 10:31:25.451964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.452504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.452561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.452581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.452851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.453121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.453144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.453160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.457222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.858 [2024-07-25 10:31:25.466563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.467122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.467171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.467189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.467454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.467731] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.467756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.467771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.471832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.858 [2024-07-25 10:31:25.480987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.481569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.481612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.481632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.481902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.482171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.482195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.482210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.486270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.858 [2024-07-25 10:31:25.495381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.495883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.495939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.495957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.496221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.496500] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.496524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.496539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.500583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.858 [2024-07-25 10:31:25.509868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.510387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.510432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.510450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.510727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.510997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.511021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.511036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.515082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.858 [2024-07-25 10:31:25.524421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.524925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.524976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.525000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.525264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.525549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.525573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.525588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.529615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.858 [2024-07-25 10:31:25.538920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.539417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.539467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.539493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.539759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.540027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.540050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.540065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.544108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.858 [2024-07-25 10:31:25.553450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.553995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.554045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.554062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.554332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.554612] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.554635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.858 [2024-07-25 10:31:25.554650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.858 [2024-07-25 10:31:25.558699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.858 [2024-07-25 10:31:25.568023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.858 [2024-07-25 10:31:25.568494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.858 [2024-07-25 10:31:25.568525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.858 [2024-07-25 10:31:25.568542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.858 [2024-07-25 10:31:25.568807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.858 [2024-07-25 10:31:25.569074] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.858 [2024-07-25 10:31:25.569111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.859 [2024-07-25 10:31:25.569127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.859 [2024-07-25 10:31:25.573199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.859 [2024-07-25 10:31:25.582559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.859 [2024-07-25 10:31:25.583133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.859 [2024-07-25 10:31:25.583188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.859 [2024-07-25 10:31:25.583208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.859 [2024-07-25 10:31:25.583478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.859 [2024-07-25 10:31:25.583760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.859 [2024-07-25 10:31:25.583783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.859 [2024-07-25 10:31:25.583799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.859 [2024-07-25 10:31:25.587847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.859 [2024-07-25 10:31:25.596969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.859 [2024-07-25 10:31:25.597448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.859 [2024-07-25 10:31:25.597500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.859 [2024-07-25 10:31:25.597522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.859 [2024-07-25 10:31:25.597793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.859 [2024-07-25 10:31:25.598068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.859 [2024-07-25 10:31:25.598092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.859 [2024-07-25 10:31:25.598107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.859 [2024-07-25 10:31:25.602221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.859 [2024-07-25 10:31:25.611359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.859 [2024-07-25 10:31:25.611960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.859 [2024-07-25 10:31:25.612003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.859 [2024-07-25 10:31:25.612022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.859 [2024-07-25 10:31:25.612293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.859 [2024-07-25 10:31:25.612578] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.859 [2024-07-25 10:31:25.612602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.859 [2024-07-25 10:31:25.612618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.859 [2024-07-25 10:31:25.616687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.859 [2024-07-25 10:31:25.625808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.859 [2024-07-25 10:31:25.626380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.859 [2024-07-25 10:31:25.626423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:35.859 [2024-07-25 10:31:25.626442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:35.859 [2024-07-25 10:31:25.626724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:35.859 [2024-07-25 10:31:25.626994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.859 [2024-07-25 10:31:25.627017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.859 [2024-07-25 10:31:25.627032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.859 [2024-07-25 10:31:25.631139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.119 [2024-07-25 10:31:25.640345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.640951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.640993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.641013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.641283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.641565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.641589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.119 [2024-07-25 10:31:25.641605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.119 [2024-07-25 10:31:25.645660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.119 [2024-07-25 10:31:25.654828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.655412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.655456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.655476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.655760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.656030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.656053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.119 [2024-07-25 10:31:25.656069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.119 [2024-07-25 10:31:25.660139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.119 [2024-07-25 10:31:25.669253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.669732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.669764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.669782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.670053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.670322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.670345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.119 [2024-07-25 10:31:25.670361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.119 [2024-07-25 10:31:25.674434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.119 [2024-07-25 10:31:25.683771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.684346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.684401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.684420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.684703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.684974] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.684997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.119 [2024-07-25 10:31:25.685012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.119 [2024-07-25 10:31:25.689066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.119 [2024-07-25 10:31:25.698173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.698748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.698791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.698810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.699081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.699351] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.699376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.119 [2024-07-25 10:31:25.699391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.119 [2024-07-25 10:31:25.703460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.119 [2024-07-25 10:31:25.712589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.713084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.713138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.713155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.713420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.713699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.713723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.119 [2024-07-25 10:31:25.713745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.119 [2024-07-25 10:31:25.717779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.119 [2024-07-25 10:31:25.727135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.727626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.727677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.727695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.727959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.728226] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.728251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.119 [2024-07-25 10:31:25.728266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.119 [2024-07-25 10:31:25.732361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.119 [2024-07-25 10:31:25.741519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.742011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.742069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.742110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.742374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.742651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.742675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.119 [2024-07-25 10:31:25.742690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.119 [2024-07-25 10:31:25.746742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.119 [2024-07-25 10:31:25.756063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.756631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.756707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.756727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.756998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.757268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.757293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.119 [2024-07-25 10:31:25.757309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.119 [2024-07-25 10:31:25.761391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.119 [2024-07-25 10:31:25.770565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.119 [2024-07-25 10:31:25.771150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.119 [2024-07-25 10:31:25.771210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.119 [2024-07-25 10:31:25.771233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.119 [2024-07-25 10:31:25.771517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.119 [2024-07-25 10:31:25.771787] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.119 [2024-07-25 10:31:25.771810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.120 [2024-07-25 10:31:25.771826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.120 [2024-07-25 10:31:25.775880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.120 [2024-07-25 10:31:25.784993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.120 [2024-07-25 10:31:25.785527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.120 [2024-07-25 10:31:25.785582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.120 [2024-07-25 10:31:25.785602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.120 [2024-07-25 10:31:25.785872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.120 [2024-07-25 10:31:25.786141] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.120 [2024-07-25 10:31:25.786166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.120 [2024-07-25 10:31:25.786181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.120 [2024-07-25 10:31:25.790241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.120 [2024-07-25 10:31:25.799595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:36.120 [2024-07-25 10:31:25.800171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:36.120 [2024-07-25 10:31:25.800226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 
00:24:36.120 [2024-07-25 10:31:25.800246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 
00:24:36.120 [2024-07-25 10:31:25.800530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 
00:24:36.120 [2024-07-25 10:31:25.800800] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:24:36.120 [2024-07-25 10:31:25.800824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:24:36.120 [2024-07-25 10:31:25.800839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:36.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1582792 Killed "${NVMF_APP[@]}" "$@" 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
00:24:36.120 [2024-07-25 10:31:25.804897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1583536 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1583536 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1583536 ']' 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 
00:24:36.120 10:31:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
00:24:36.120 [2024-07-25 10:31:25.813967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:36.120 [2024-07-25 10:31:25.814465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:36.120 [2024-07-25 10:31:25.814531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 
00:24:36.120 [2024-07-25 10:31:25.814551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 
00:24:36.120 [2024-07-25 10:31:25.814823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 
00:24:36.120 [2024-07-25 10:31:25.815093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:24:36.120 [2024-07-25 10:31:25.815116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:24:36.120 [2024-07-25 10:31:25.815133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:36.120 [2024-07-25 10:31:25.819178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.120 [2024-07-25 10:31:25.828505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:36.120 [2024-07-25 10:31:25.828956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:36.120 [2024-07-25 10:31:25.828996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 
00:24:36.120 [2024-07-25 10:31:25.829015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 
00:24:36.120 [2024-07-25 10:31:25.829286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 
00:24:36.120 [2024-07-25 10:31:25.829566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:24:36.120 [2024-07-25 10:31:25.829590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:24:36.120 [2024-07-25 10:31:25.829606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:36.120 [2024-07-25 10:31:25.833637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.120 [2024-07-25 10:31:25.842949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:36.120 [2024-07-25 10:31:25.843418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:36.120 [2024-07-25 10:31:25.843460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 
00:24:36.120 [2024-07-25 10:31:25.843490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 
00:24:36.120 [2024-07-25 10:31:25.843770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 
00:24:36.120 [2024-07-25 10:31:25.844040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:24:36.120 [2024-07-25 10:31:25.844063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:24:36.120 [2024-07-25 10:31:25.844079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:36.120 [2024-07-25 10:31:25.848110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.120 [2024-07-25 10:31:25.857507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:36.120 [2024-07-25 10:31:25.857965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:36.120 [2024-07-25 10:31:25.858008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 
00:24:36.120 [2024-07-25 10:31:25.858027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 
00:24:36.120 [2024-07-25 10:31:25.858305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 
00:24:36.120 [2024-07-25 10:31:25.858585] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:24:36.120 [2024-07-25 10:31:25.858609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:24:36.120 [2024-07-25 10:31:25.858625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:36.120 [2024-07-25 10:31:25.859464] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
00:24:36.120 [2024-07-25 10:31:25.859562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:24:36.120 [2024-07-25 10:31:25.862657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
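At this point the harness has killed the old target (the "Killed" line above) and tgt_init has relaunched nvmf_tgt inside the cvl_0_0_ns_spdk namespace, while waitforlisten polls for the RPC socket at /var/tmp/spdk.sock with max_retries=100; the reconnect errors keep firing until the new target is up. A rough sketch of that kind of wait loop, simplified from what the traced helper appears to do (the real waitforlisten in autotest_common.sh does additional RPC-level checks):

    pid=1583536; rpc_addr=/var/tmp/spdk.sock; max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # Give up early if the freshly started nvmf_tgt already died.
        kill -0 "$pid" 2>/dev/null || { echo 'nvmf_tgt exited'; break; }
        # Ready once the UNIX-domain RPC socket shows up.
        [ -S "$rpc_addr" ] && break
        sleep 0.1
    done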
00:24:36.120 [2024-07-25 10:31:25.871938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.120 [2024-07-25 10:31:25.872368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.120 [2024-07-25 10:31:25.872401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.120 [2024-07-25 10:31:25.872418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.120 [2024-07-25 10:31:25.872692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.120 [2024-07-25 10:31:25.872961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.120 [2024-07-25 10:31:25.872984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.120 [2024-07-25 10:31:25.873000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.121 [2024-07-25 10:31:25.877027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.121 [2024-07-25 10:31:25.886320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.121 [2024-07-25 10:31:25.886759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.121 [2024-07-25 10:31:25.886791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc38d0 with addr=10.0.0.2, port=4420 00:24:36.121 [2024-07-25 10:31:25.886809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc38d0 is same with the state(5) to be set 00:24:36.121 [2024-07-25 10:31:25.887074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc38d0 (9): Bad file descriptor 00:24:36.121 [2024-07-25 10:31:25.887350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.121 [2024-07-25 10:31:25.887374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.121 [2024-07-25 10:31:25.887389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.121 [2024-07-25 10:31:25.891446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.382 EAL: No free 2048 kB hugepages reported on node 1
00:24:36.382 [2024-07-25 10:31:25.930961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:36.383 [2024-07-25 10:31:26.050857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:36.383 [2024-07-25 10:31:26.050896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:36.383 [2024-07-25 10:31:26.050912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:36.383 [2024-07-25 10:31:26.050925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:36.383 [2024-07-25 10:31:26.050937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:36.383 [2024-07-25 10:31:26.054502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:36.383 [2024-07-25 10:31:26.054588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:36.383 [2024-07-25 10:31:26.054625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
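The three reactors match the EAL core mask -c 0xE passed at startup: 0xE is binary 1110, selecting cores 1, 2 and 3 and leaving core 0 free, which is also why spdk_app_start reports three available cores. A plain-bash way to expand any such mask:

# expand a CPU mask into the cores it selects; 0xE -> cores 1 2 3
mask=0xE
for i in $(seq 0 31); do
  (( (mask >> i) & 1 )) && echo "core $i selected"
done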
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:36.644 [2024-07-25 10:31:26.193309] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:36.644 Malloc0
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:36.644 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:36.645 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:36.645 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:36.645 [2024-07-25 10:31:26.265934] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:36.645 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:36.645 10:31:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1583024
00:24:36.645 [2024-07-25 10:31:26.277818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.905 [2024-07-25 10:31:26.447812] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
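The rpc_cmd calls traced above are the test harness's wrapper around SPDK's scripts/rpc.py. Driven by hand against an already-running nvmf_tgt, the same target setup would look roughly like this (a sketch; it assumes stock SPDK paths and the same values this run used):

# build the NVMe/TCP target that the bdevperf host connects to
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host side (process 1583024 being waited on) is bdevperf; per the job summary below it ran a verify workload at queue depth 128 with 4096-byte I/O for about 15 seconds. A standalone invocation along those lines might be (the file name, attach parameters and binary path here are illustrative, not taken from this run):

# attach the remote namespace as bdev Nvme1n1, then run the verify workload
cat > /tmp/bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "TCP", "adrfam": "IPv4",
                "traddr": "10.0.0.2", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1" } } ] } ] }
EOF
build/examples/bdevperf --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 15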
00:24:46.888 00:24:46.888 Latency(us) 00:24:46.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.888 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:46.888 Verification LBA range: start 0x0 length 0x4000 00:24:46.888 Nvme1n1 : 15.01 5680.85 22.19 7747.28 0.00 9501.78 658.39 17087.91 00:24:46.888 =================================================================================================================== 00:24:46.888 Total : 5680.85 22.19 7747.28 0.00 9501.78 658.39 17087.91 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:46.888 rmmod nvme_tcp 00:24:46.888 rmmod nvme_fabrics 00:24:46.888 rmmod nvme_keyring 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1583536 ']' 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1583536 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1583536 ']' 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1583536 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583536 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583536' 00:24:46.888 killing process with pid 1583536 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1583536 00:24:46.888 
10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1583536 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.888 10:31:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:48.271 00:24:48.271 real 0m22.147s 00:24:48.271 user 0m57.888s 00:24:48.271 sys 0m4.844s 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:48.271 ************************************ 00:24:48.271 END TEST nvmf_bdevperf 00:24:48.271 ************************************ 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.271 ************************************ 00:24:48.271 START TEST nvmf_target_disconnect 00:24:48.271 ************************************ 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:48.271 * Looking for test storage... 
00:24:48.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.271 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.271 
10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:48.272 10:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.182 
10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:50.182 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:50.182 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:50.182 Found net devices under 0000:08:00.0: cvl_0_0 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.182 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:50.183 Found net devices under 0000:08:00.1: cvl_0_1 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:50.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:24:50.183 00:24:50.183 --- 10.0.0.2 ping statistics --- 00:24:50.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.183 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:24:50.183 00:24:50.183 --- 10.0.0.1 ping statistics --- 00:24:50.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.183 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:50.183 ************************************ 00:24:50.183 START TEST nvmf_target_disconnect_tc1 00:24:50.183 ************************************ 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:50.183 10:31:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:50.183 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.183 [2024-07-25 10:31:39.757393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.183 [2024-07-25 10:31:39.757467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e30c0 with addr=10.0.0.2, port=4420 00:24:50.183 [2024-07-25 10:31:39.757521] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:50.183 [2024-07-25 10:31:39.757544] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:50.183 [2024-07-25 10:31:39.757559] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:50.183 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:50.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:50.183 Initializing NVMe Controllers 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:50.183 00:24:50.183 real 0m0.098s 00:24:50.183 user 0m0.045s 00:24:50.183 sys 0m0.053s 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:50.183 ************************************ 00:24:50.183 END TEST nvmf_target_disconnect_tc1 00:24:50.183 ************************************ 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:50.183 10:31:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:50.183 ************************************ 00:24:50.183 START TEST nvmf_target_disconnect_tc2 00:24:50.183 ************************************ 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:50.183 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1585963 00:24:50.184 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:50.184 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1585963 00:24:50.184 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1585963 ']' 00:24:50.184 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.184 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.184 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.184 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.184 10:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 [2024-07-25 10:31:39.885519] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:24:50.184 [2024-07-25 10:31:39.885617] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.184 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.184 [2024-07-25 10:31:39.952457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.443 [2024-07-25 10:31:40.073998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:50.443 [2024-07-25 10:31:40.074065] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.443 [2024-07-25 10:31:40.074080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.443 [2024-07-25 10:31:40.074093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.443 [2024-07-25 10:31:40.074105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.443 [2024-07-25 10:31:40.074505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:50.443 [2024-07-25 10:31:40.074572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:50.443 [2024-07-25 10:31:40.074605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:50.443 [2024-07-25 10:31:40.074609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:50.443 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.443 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:50.444 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.444 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:50.444 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.444 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.444 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:50.444 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.444 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.702 Malloc0 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.702 [2024-07-25 10:31:40.245083] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.702 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.703 [2024-07-25 10:31:40.273404] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1585991 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:50.703 10:31:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:50.703 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.667 10:31:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1585963 00:24:52.667 10:31:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting 
I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 [2024-07-25 10:31:42.298352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 
00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 [2024-07-25 10:31:42.298874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read 
completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Write completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 [2024-07-25 10:31:42.299263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.667 starting I/O failed 00:24:52.667 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with 
error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Write completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 Read completed with error (sct=0, sc=8) 00:24:52.668 starting I/O failed 00:24:52.668 [2024-07-25 10:31:42.299659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:52.668 [2024-07-25 10:31:42.299906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.668 [2024-07-25 10:31:42.299964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.668 qpair failed and we were unable to recover it. 00:24:52.668 [2024-07-25 10:31:42.300217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.668 [2024-07-25 10:31:42.300265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.668 qpair failed and we were unable to recover it. 00:24:52.668 [2024-07-25 10:31:42.300428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.668 [2024-07-25 10:31:42.300489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.668 qpair failed and we were unable to recover it. 00:24:52.668 [2024-07-25 10:31:42.300742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.668 [2024-07-25 10:31:42.300798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.668 qpair failed and we were unable to recover it. 00:24:52.668 [2024-07-25 10:31:42.301023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.668 [2024-07-25 10:31:42.301075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.668 qpair failed and we were unable to recover it. 00:24:52.668 [2024-07-25 10:31:42.301330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.668 [2024-07-25 10:31:42.301383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.668 qpair failed and we were unable to recover it. 00:24:52.668 [2024-07-25 10:31:42.301581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.668 [2024-07-25 10:31:42.301631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.668 qpair failed and we were unable to recover it. 00:24:52.668 [2024-07-25 10:31:42.301839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.668 [2024-07-25 10:31:42.301881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.668 qpair failed and we were unable to recover it. 
[... several dozen further reconnect attempts, 10:31:42.302104 through 10:31:42.319386, elided: each one repeats the identical triplet of posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 (three attempts near 10:31:42.307 hit tqpair=0x1bed120 instead), then "qpair failed and we were unable to recover it." ...]
00:24:52.670 [2024-07-25 10:31:42.319569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.319596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 00:24:52.670 [2024-07-25 10:31:42.319753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.319808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 00:24:52.670 [2024-07-25 10:31:42.319940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.319991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 00:24:52.670 [2024-07-25 10:31:42.320209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.320235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 00:24:52.670 [2024-07-25 10:31:42.320432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.320493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 00:24:52.670 [2024-07-25 10:31:42.320664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.320715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 00:24:52.670 [2024-07-25 10:31:42.320837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.320863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 00:24:52.670 [2024-07-25 10:31:42.321053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.321101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 00:24:52.670 [2024-07-25 10:31:42.321211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.321239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 00:24:52.670 [2024-07-25 10:31:42.321433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.670 [2024-07-25 10:31:42.321460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.670 qpair failed and we were unable to recover it. 
00:24:52.670 [2024-07-25 10:31:42.321691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.670 [2024-07-25 10:31:42.321733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:52.670 qpair failed and we were unable to recover it.
00:24:52.670 [2024-07-25 10:31:42.322575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.670 [2024-07-25 10:31:42.322636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:52.670 qpair failed and we were unable to recover it.
00:24:52.675 [... identical failure triplets continue, alternating between tqpair=0x7fb560000b90 and tqpair=0x7fb568000b90, from 10:31:42.321945 through 10:31:42.356472; all with addr=10.0.0.2, port=4420, errno = 111 ...]
00:24:52.675 [2024-07-25 10:31:42.356665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.356713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.356873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.356923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.357079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.357131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.357349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.357401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.357521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.357561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.357761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.357809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.357909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.357935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.358100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.358155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.358345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.358394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.358500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.358527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 
00:24:52.675 [2024-07-25 10:31:42.358752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.358804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.675 [2024-07-25 10:31:42.359015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.675 [2024-07-25 10:31:42.359063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.675 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.359310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.359336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.359531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.359584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.359781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.359835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.360036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.360084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.360291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.360338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.360502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.360553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.360686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.360740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.360909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.360957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 
00:24:52.676 [2024-07-25 10:31:42.361134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.361183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.361396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.361422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.361561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.361614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.361817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.361864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.362015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.362067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.362224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.362250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.362441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.362504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.362688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.362739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.362916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.362944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.363155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.363209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 
00:24:52.676 [2024-07-25 10:31:42.363340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.363392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.363513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.363541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.363679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.363705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.363832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.363879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.364090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.364139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.364342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.364391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.364624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.364682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.364851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.364897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.365018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.676 [2024-07-25 10:31:42.365079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.676 qpair failed and we were unable to recover it. 00:24:52.676 [2024-07-25 10:31:42.365282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.365336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 
00:24:52.677 [2024-07-25 10:31:42.365605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.365654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.365910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.365959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.366064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.366120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.366322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.366375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.366590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.366617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.366774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.366831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.367003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.367061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.367271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.367319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.367524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.367579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.367733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.367760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 
00:24:52.677 [2024-07-25 10:31:42.367934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.367987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.368148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.368203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.368395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.368445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.368644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.368696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.368943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.368973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.369217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.369243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.369450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.369505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.369687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.369745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.369989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.370015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.370207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.370233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 
00:24:52.677 [2024-07-25 10:31:42.370367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.370420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.370531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.370559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.370726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.370778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.370920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.370969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.371148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.371195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.371332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.371358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.371499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.371552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.371770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.371820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.372037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.372087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.372219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.372278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 
00:24:52.677 [2024-07-25 10:31:42.372525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.677 [2024-07-25 10:31:42.372552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.677 qpair failed and we were unable to recover it. 00:24:52.677 [2024-07-25 10:31:42.372683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.372739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.372971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.373020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.373202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.373254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.373364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.373392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.373529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.373584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.373705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.373768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.374014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.374041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.374244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.374294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.374510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.374567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 
00:24:52.678 [2024-07-25 10:31:42.374714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.374764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.374953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.375007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.375160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.375205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.375343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.375399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.375579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.375630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.375807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.375833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.376046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.376100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.376294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.376321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.376504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.376554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.376707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.376760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 
00:24:52.678 [2024-07-25 10:31:42.376972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.377024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.377177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.377229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.377393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.377420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.377617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.377644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.377854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.377910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.378089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.378145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.378304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.378357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.378539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.378566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.378672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.378700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.378848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.378898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 
00:24:52.678 [2024-07-25 10:31:42.379105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.379155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.379304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.379352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.379510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.379562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.678 [2024-07-25 10:31:42.379808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.678 [2024-07-25 10:31:42.379834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.678 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.380014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.380064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.380227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.380285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.380422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.380473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.380736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.380762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.380921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.380979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.381158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.381212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 
00:24:52.679 [2024-07-25 10:31:42.381401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.381428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.381586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.381640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.381807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.381835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.381977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.382029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.382191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.382241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.382425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.382478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.382702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.382760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.382879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.382944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.383157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.383212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.383428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.383487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 
00:24:52.679 [2024-07-25 10:31:42.383608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.383634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.383877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.383930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.384128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.384179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.384306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.384359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.384573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.384600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.384779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.384836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.384995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.385047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.385277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.385327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.385534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.385591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 00:24:52.679 [2024-07-25 10:31:42.385778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.679 [2024-07-25 10:31:42.385828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.679 qpair failed and we were unable to recover it. 
00:24:52.679 [2024-07-25 10:31:42.386011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.679 [2024-07-25 10:31:42.386063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:52.679 qpair failed and we were unable to recover it.
[... the same three-record failure (connect() errno = 111 from posix_sock_create, sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 10:31:42.386 through 10:31:42.430 ...]
00:24:52.685 [2024-07-25 10:31:42.430314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.430369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.430580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.430659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.430942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.431004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.431353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.431453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.431670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.431734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.431966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.431993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.432193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.432233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.432502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.432552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.432773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.432832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.433162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.433220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 
00:24:52.685 [2024-07-25 10:31:42.433627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.433734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.433989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.685 [2024-07-25 10:31:42.434051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.685 qpair failed and we were unable to recover it. 00:24:52.685 [2024-07-25 10:31:42.434314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.434373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.434687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.434761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.435048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.435117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.435352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.435425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.435633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.435710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.435973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.436032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.436291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.436359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.436539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.436601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 
00:24:52.958 [2024-07-25 10:31:42.436912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.436971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.437272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.437342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.437679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.437750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.437949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.438015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.438234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.438293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.438506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.438556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.438889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.958 [2024-07-25 10:31:42.438948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.958 qpair failed and we were unable to recover it. 00:24:52.958 [2024-07-25 10:31:42.439148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.439198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.439518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.439582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.439872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.439931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 
00:24:52.959 [2024-07-25 10:31:42.440255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.440317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.440558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.440621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.440815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.440886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.441177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.441237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.441532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.441559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.441846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.441908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.442123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.442196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.442499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.442551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.442699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.442769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.443085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.443145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 
00:24:52.959 [2024-07-25 10:31:42.443466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.443553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.443846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.443905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.444239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.444312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.444498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.444567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.444763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.444832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.445034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.445092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.445389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.445448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.445724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.445782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.446007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.446033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.446298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.446358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 
00:24:52.959 [2024-07-25 10:31:42.446623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.446683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.446874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.446945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.447142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.447200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.447516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.447561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.447775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.447863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.448140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.448199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.448468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.448539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.448908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.448967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.449162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.449208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.449504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.449563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 
00:24:52.959 [2024-07-25 10:31:42.449851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.449910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.450268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.450326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.450651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.450678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.450904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.450975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.959 qpair failed and we were unable to recover it. 00:24:52.959 [2024-07-25 10:31:42.451342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.959 [2024-07-25 10:31:42.451399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.451573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.451601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.451963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.452024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.452353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.452423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.452719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.452778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.453109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.453168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 
00:24:52.960 [2024-07-25 10:31:42.453513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.453575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.453863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.453922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.454215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.454277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.454587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.454631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.454838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.454910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.455200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.455258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.455634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.455696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.455998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.456058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.456387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.456445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.456800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.456859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 
00:24:52.960 [2024-07-25 10:31:42.457129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.457187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.457375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.457416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.457626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.457677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.457903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.457958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.458202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.458250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.458463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.458523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.458662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.458713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.458823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.458851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.459056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.459083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.459212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.459271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 
00:24:52.960 [2024-07-25 10:31:42.459547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.459574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.459791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.459846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.460022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.460049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.460180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.460232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.460412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.460472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.460644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.460702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.460923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.460976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.461202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.461252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.461486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.461514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.461712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.461760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 
00:24:52.960 [2024-07-25 10:31:42.461920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.461972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.960 [2024-07-25 10:31:42.462193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.960 [2024-07-25 10:31:42.462243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.960 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.462519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.462598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.462798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.462871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.463169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.463230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.463537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.463611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.463841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.463904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.464116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.464191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.464582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.464664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.465011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.465058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 
00:24:52.961 [2024-07-25 10:31:42.465365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.465423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.465713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.465771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.465920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.465967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.466148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.466204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.466370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.466426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.466558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.466589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.466787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.466836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.467067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.467116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.467302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.467353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.467541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.467569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 
00:24:52.961 [2024-07-25 10:31:42.467816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.467867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.468014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.468069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.468230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.468280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.468560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.468610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.468837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.468891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.469127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.469180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.469348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.469395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.469510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.469537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.469698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.469725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.469969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.470035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 
00:24:52.961 [2024-07-25 10:31:42.470337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.470396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.470713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.470773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.471002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.471065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.471269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.471341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.471643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.471715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.472056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.472114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.472413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.472473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.472729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.472755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.472964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.473025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.961 qpair failed and we were unable to recover it. 00:24:52.961 [2024-07-25 10:31:42.473221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.961 [2024-07-25 10:31:42.473267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 
00:24:52.962 [2024-07-25 10:31:42.473496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.473546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.473716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.473745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.473929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.473979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.474146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.474203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.474391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.474418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.474538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.474566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.474853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.474903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.475009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.475036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.475198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.475249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.475439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.475501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 
00:24:52.962 [2024-07-25 10:31:42.475677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.475729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.475954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.476001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.476115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.476142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.476391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.476455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.476706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.476739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.476998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.477057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.477385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.477443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.477728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.477787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.478053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.478117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.478417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.478476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 
00:24:52.962 [2024-07-25 10:31:42.478690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.478761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.479054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.479108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.479344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.479401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.479573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.479625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.479789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.479817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.480085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.480137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.480244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.480271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.480493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.480551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.480744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.480792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.480961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.481015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 
00:24:52.962 [2024-07-25 10:31:42.481133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.481199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.481392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.481443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.481654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.481708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.481900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.481950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.482111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.482143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.482250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.482276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.482378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.962 [2024-07-25 10:31:42.482404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.962 qpair failed and we were unable to recover it. 00:24:52.962 [2024-07-25 10:31:42.482613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.482662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.482789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.482843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.483000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.483061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 
00:24:52.963 [2024-07-25 10:31:42.483223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.483282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.483511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.483559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.483700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.483752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.483925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.483980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.484197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.484247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.484410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.484464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.484691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.484744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.484921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.484976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.485139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.485193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.485333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.485386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 
00:24:52.963 [2024-07-25 10:31:42.485549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.485599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.485715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.485741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.485954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.486001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.486108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.486135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.486341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.486367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.486500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.486527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.486715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.486772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.486986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.487012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.487221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.487271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.487467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.487531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 
00:24:52.963 [2024-07-25 10:31:42.487714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.487771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.487979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.488047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.488295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.488357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.488563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.488616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.488798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.488874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.489168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.489226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.489522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.963 [2024-07-25 10:31:42.489585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.963 qpair failed and we were unable to recover it. 00:24:52.963 [2024-07-25 10:31:42.489870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.489928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.490132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.490208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.490510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.490551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 
00:24:52.964 [2024-07-25 10:31:42.490746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.490803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.491070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.491129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.491338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.491385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.491558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.491625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.491931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.492004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.492214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.492286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.492595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.492656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.492864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.492905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.493181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.493256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.493435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.493506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 
00:24:52.964 [2024-07-25 10:31:42.493855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.493914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.494239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.494297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.494566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.494627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.494799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.494853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.495231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.495288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.495628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.495691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.496005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.496069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.496299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.496325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.496658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.496731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.497106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.497165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 
00:24:52.964 [2024-07-25 10:31:42.497415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.497475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.497758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.497819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.498025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.498086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.498423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.498497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.498787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.498827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.499121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.499180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.499496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.499547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.499750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.499804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.500094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.500155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.500322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.500389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 
00:24:52.964 [2024-07-25 10:31:42.500570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.500630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.500937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.500985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.501299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.501360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.501560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.501623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.501926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.964 [2024-07-25 10:31:42.501984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.964 qpair failed and we were unable to recover it. 00:24:52.964 [2024-07-25 10:31:42.502279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.502338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.502602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.502665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.502857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.502928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.503266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.503323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.503695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.503757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 
00:24:52.965 [2024-07-25 10:31:42.504014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.504075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.504427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.504499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.504783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.504841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.505051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.505122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.505430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.505508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.505734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.505794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.506021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.506082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.506365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.506427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.506688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.506714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.506965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.507024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 
00:24:52.965 [2024-07-25 10:31:42.507394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.507467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.507863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.507923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.508269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.508330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.508631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.508693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.508927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.508988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.509304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.509331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.509569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.509634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.510021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.510080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.510394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.510454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.510751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.510813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 
00:24:52.965 [2024-07-25 10:31:42.511119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.511179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.511478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.511570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.511920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.511990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.512322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.512381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.512670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.512733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.513036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.513094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.513298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.513370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.513739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.513799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.514007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.514050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.514398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.514460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 
00:24:52.965 [2024-07-25 10:31:42.514688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.514763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.514970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.515057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.965 [2024-07-25 10:31:42.515317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.965 [2024-07-25 10:31:42.515378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.965 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.515710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.515771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.516080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.516142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.516474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.516507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.516870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.516930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.517117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.517166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.517530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.517591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.517924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.517982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 
00:24:52.966 [2024-07-25 10:31:42.518286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.518347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.518671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.518732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.519025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.519084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.519419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.519477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.519714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.519756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.520073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.520133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.520431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.520518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.520784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.520843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.521093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.521167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.521452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.521561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 
00:24:52.966 [2024-07-25 10:31:42.521850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.521909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.522201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.522226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.522535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.522562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.522800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.522859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.523216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.523274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.523580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.523641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.523939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.524001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.524267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.524326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.524620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.524682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.524994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.525055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 
00:24:52.966 [2024-07-25 10:31:42.525267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.525317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.525672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.525732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.526101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.526159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.526327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.526377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.526628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.526704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.526955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.527013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.527250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.527310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.527615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.527678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.527933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.527995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 00:24:52.966 [2024-07-25 10:31:42.528344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.966 [2024-07-25 10:31:42.528407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.966 qpair failed and we were unable to recover it. 
00:24:52.966 [2024-07-25 10:31:42.528723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.966 [2024-07-25 10:31:42.528769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:52.967 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every intervening connection attempt, timestamps 10:31:42.529156 through 10:31:42.596334 ...]
00:24:52.972 [2024-07-25 10:31:42.596987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.972 [2024-07-25 10:31:42.597045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:52.972 qpair failed and we were unable to recover it.
00:24:52.972 [2024-07-25 10:31:42.597301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.972 [2024-07-25 10:31:42.597327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.972 qpair failed and we were unable to recover it. 00:24:52.972 [2024-07-25 10:31:42.597567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.972 [2024-07-25 10:31:42.597626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.972 qpair failed and we were unable to recover it. 00:24:52.972 [2024-07-25 10:31:42.597925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.597983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.598286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.598348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.598683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.598744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.599029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.599091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.599358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.599417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.599779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.599839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.600140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.600198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.600380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.600429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 
00:24:52.973 [2024-07-25 10:31:42.600742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.600801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.601124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.601182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.601463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.601536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.601835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.601893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.602105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.602186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.602383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.602451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.602750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.602809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.603088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.603145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.603528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.603555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.603844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.603902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 
00:24:52.973 [2024-07-25 10:31:42.604227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.604285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.604492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.604540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.604713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.604776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.605102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.605160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.605405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.605431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.605682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.605757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.606117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.606175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.606420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.606468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.606826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.606852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.607174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.607233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 
00:24:52.973 [2024-07-25 10:31:42.607440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.607531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.607831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.607902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.608203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.608262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.608535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.608594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.608885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.608946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.609207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.609267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.609546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.609573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.609846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.609906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.610084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.610150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.973 qpair failed and we were unable to recover it. 00:24:52.973 [2024-07-25 10:31:42.610385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.973 [2024-07-25 10:31:42.610413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 
00:24:52.974 [2024-07-25 10:31:42.610694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.610753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.610922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.610970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.611177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.611234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.611443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.611537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.611802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.611872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.612122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.612177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.612410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.612469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.612727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.612753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.613060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.613128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.613374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.613432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 
00:24:52.974 [2024-07-25 10:31:42.613770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.613832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.614098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.614160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.614422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.614511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.614753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.614812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.615127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.615185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.615393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.615464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.615728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.615754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.615954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.616025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.616225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.616274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.616513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.616541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 
00:24:52.974 [2024-07-25 10:31:42.616800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.616862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.617075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.617132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.617336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.617381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.617596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.617654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.617879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.617967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.618307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.618366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.618706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.618765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.619075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.619101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.619346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.619371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.619599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.619673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 
00:24:52.974 [2024-07-25 10:31:42.619889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.619963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.620208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.620288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.620575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.620630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.620908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.620966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.974 [2024-07-25 10:31:42.621277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.974 [2024-07-25 10:31:42.621303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.974 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.621527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.621587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.621883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.621940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.622239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.622296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.622639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.622699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.622993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.623051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 
00:24:52.975 [2024-07-25 10:31:42.623396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.623454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.623740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.623799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.624009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.624083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.624409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.624466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.624793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.624819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.625112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.625169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.625468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.625567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.625884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.625943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.626235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.626292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.626539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.626565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 
00:24:52.975 [2024-07-25 10:31:42.626865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.626891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.627191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.627249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.627512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.627572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.627873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.627931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.628141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.628220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.628552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.628613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.628818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.628877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.629123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.629150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.629465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.629574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.629875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.629934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 
00:24:52.975 [2024-07-25 10:31:42.630172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.630227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.630516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.630556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.630822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.630880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.631149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.631207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.631407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.631507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.631725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.631790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.632080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.632153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.632381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.632444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.632660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.632716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.632907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.632958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 
00:24:52.975 [2024-07-25 10:31:42.633231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.633296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.633502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.975 [2024-07-25 10:31:42.633561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.975 qpair failed and we were unable to recover it. 00:24:52.975 [2024-07-25 10:31:42.633682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.633708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.633812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.633837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.633934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.633960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.634074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.634104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.634303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.634387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.634703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.634763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.634965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.635026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.635219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.635282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 
00:24:52.976 [2024-07-25 10:31:42.635471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.635533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.635741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.635794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.635942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.636001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.636220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.636248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.636414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.636440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.636646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.636703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.636841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.636888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.637047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.637074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.637258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.637287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 00:24:52.976 [2024-07-25 10:31:42.637463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.637545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it. 
00:24:52.976 [2024-07-25 10:31:42.637743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.976 [2024-07-25 10:31:42.637795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.976 qpair failed and we were unable to recover it.
[... the same three-line failure pattern -- posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, then "qpair failed and we were unable to recover it." -- repeats from 10:31:42.637944 through 10:31:42.686318, cycling through tqpair=0x1bed120, 0x7fb568000b90, 0x7fb560000b90, and 0x7fb558000b90, always against addr=10.0.0.2, port=4420. The one distinct message in the run is: ...]
00:24:52.978 [2024-07-25 10:31:42.653963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfb190 is same with the state(5) to be set
[... refused connect() attempts and "qpair failed and we were unable to recover it." continue as above ...]
00:24:52.982 [2024-07-25 10:31:42.686512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.686579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.686752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.686801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.687006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.687059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.687251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.687301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.687506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.687557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.687663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.687691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.687793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.687819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.687927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.687955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.688156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.688207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.688379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.688433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 
00:24:52.982 [2024-07-25 10:31:42.688593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.688659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.688832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.688885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.689100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.689150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.689300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.689356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.689493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.689542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.689691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.689743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.689931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.689980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.690111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.982 [2024-07-25 10:31:42.690169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.982 qpair failed and we were unable to recover it. 00:24:52.982 [2024-07-25 10:31:42.690348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.690377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.690485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.690512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 
00:24:52.983 [2024-07-25 10:31:42.690673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.690724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.690887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.690943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.691142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.691197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.691384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.691431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.691608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.691662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.691816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.691842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.691991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.692057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.692229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.692286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.692431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.692478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.692602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.692630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 
00:24:52.983 [2024-07-25 10:31:42.692737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.692762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.692963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.693028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.693246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.693309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.693543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.693571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.693786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.693847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.694081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.694107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.694331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.694357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.694648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.694707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.694986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.695044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.695324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.695384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 
00:24:52.983 [2024-07-25 10:31:42.695668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.695718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.695908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.695957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.696132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.696189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.696394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.696447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.696631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.696689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.696870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.696898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.697068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.697094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.697247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.697275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.697434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.697464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.697691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.697757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 
00:24:52.983 [2024-07-25 10:31:42.697954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.698006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.698174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.698226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.698465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.698563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.698801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.698854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.699092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.699142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.983 [2024-07-25 10:31:42.699316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.983 [2024-07-25 10:31:42.699370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.983 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.699478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.699511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.699647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.699697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.699896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.699949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.700053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.700079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 
00:24:52.984 [2024-07-25 10:31:42.700286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.700335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.700517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.700545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.700727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.700774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.700944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.701000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.701185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.701240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.701394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.701420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.701627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.701680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.701900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.701955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.702106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.702172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.702373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.702421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 
00:24:52.984 [2024-07-25 10:31:42.702607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.702659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.702950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.702999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.703169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.703223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.703391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.703417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.703627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.703693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.703969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.704041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.704236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.704308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.704633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.704692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.704896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.704949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.705237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.705294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 
00:24:52.984 [2024-07-25 10:31:42.705499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.705548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.705761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.705820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.706011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.706073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.706279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.706353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.706519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.706575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.706922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.706992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.707239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.707296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.707545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.707599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.707818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.707874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.707988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.708015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 
00:24:52.984 [2024-07-25 10:31:42.708200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.708258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.708440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.708505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.708730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.708756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.984 [2024-07-25 10:31:42.708909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.984 [2024-07-25 10:31:42.708959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.984 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.709165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.709233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.709425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.709512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.709682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.709724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.709935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.709962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.710171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.710197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.710475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.710507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 
00:24:52.985 [2024-07-25 10:31:42.710806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.710865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.711077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.711139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.711415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.711476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.711716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.711778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.712025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.712050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.712369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.712428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.712704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.712753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.712894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.712951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.713148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.713198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.713421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.713468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 
00:24:52.985 [2024-07-25 10:31:42.713682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.713736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.713915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.713968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.714200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.714227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.714412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.714438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.714639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.714686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.714887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.714937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.715108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.715160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.715318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.715370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.715531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.715558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.715719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.715764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 
00:24:52.985 [2024-07-25 10:31:42.715956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.716010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.716127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.716154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.716315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.716373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.716569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.716596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.716778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.716828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.716986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.717013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.717173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.717201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.717379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.717430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.717575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.717626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 00:24:52.985 [2024-07-25 10:31:42.717830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.985 [2024-07-25 10:31:42.717879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:52.985 qpair failed and we were unable to recover it. 
00:24:52.985 [2024-07-25 10:31:42.718027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.985 [2024-07-25 10:31:42.718097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:52.985 qpair failed and we were unable to recover it.
[... the same three-record failure pattern (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> qpair failed and we were unable to recover it.) repeats continuously from 10:31:42.718 through 10:31:42.769 (log timestamps 00:24:52.985-00:24:53.270), cycling over tqpair=0x7fb558000b90, 0x7fb568000b90, 0x7fb560000b90, and 0x1bed120, always with addr=10.0.0.2, port=4420 ...]
00:24:53.270 [2024-07-25 10:31:42.770094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.770121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.770297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.770351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.770547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.770590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.770836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.770895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.771091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.771161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.771348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.771395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.771609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.771638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.771808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.771836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.772061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.772112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.772283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.772340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 
00:24:53.270 [2024-07-25 10:31:42.772533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.772561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.772727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.772793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.772898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.772924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.773061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.773114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.773298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.773346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.773586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.773650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.773936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.773998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.774201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.774240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.774531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.774557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.774814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.774876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 
00:24:53.270 [2024-07-25 10:31:42.775148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.775206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.775462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.775493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.775654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.775684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.775919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.775971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.270 qpair failed and we were unable to recover it. 00:24:53.270 [2024-07-25 10:31:42.776139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.270 [2024-07-25 10:31:42.776194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.776348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.776397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.776543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.776598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.776834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.776884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.777046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.777099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.777266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.777321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 
00:24:53.271 [2024-07-25 10:31:42.777532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.777561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.777807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.777855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.778007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.778070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.778287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.778338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.778537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.778564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.778771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.778828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.778992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.779047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.779268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.779321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.779449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.779513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.779724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.779787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 
00:24:53.271 [2024-07-25 10:31:42.779999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.780049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.780193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.780244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.780446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.780506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.780703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.780729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.780881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.780932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.781038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.781065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.781297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.781362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.781571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.781627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.781798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.781852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.782047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.782075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 
00:24:53.271 [2024-07-25 10:31:42.782286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.782337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.782531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.782578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.782813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.782864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.783031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.783085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.783193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.783221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.783425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.783485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.783664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.783717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.783856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.783907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.784038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.784092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.784336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.784392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 
00:24:53.271 [2024-07-25 10:31:42.784589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.784631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.784781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.271 [2024-07-25 10:31:42.784807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.271 qpair failed and we were unable to recover it. 00:24:53.271 [2024-07-25 10:31:42.784987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.785045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.785229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.785282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.785447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.785512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.785667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.785709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.785882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.785935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.786103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.786161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.786328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.786381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.786568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.786619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 
00:24:53.272 [2024-07-25 10:31:42.786767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.786816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.786947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.786995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.787218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.787268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.787468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.787523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.787691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.787744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.787901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.787928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.788176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.788225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.788493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.788541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.788755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.788820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.789007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.789059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 
00:24:53.272 [2024-07-25 10:31:42.789310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.789369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.789529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.789558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.789784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.789812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.789995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.790043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.790217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.790276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.790494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.790543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.790703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.790730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.790889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.790938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.791211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.791272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.791512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.791560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 
00:24:53.272 [2024-07-25 10:31:42.791771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.791818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.791967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.792020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.792343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.792402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.792649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.792708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.792925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.792987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.793223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.793249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.793475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.793507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.793784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.793843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.794126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.794185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 00:24:53.272 [2024-07-25 10:31:42.794529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.272 [2024-07-25 10:31:42.794592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.272 qpair failed and we were unable to recover it. 
00:24:53.273 [2024-07-25 10:31:42.794786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.794859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.795140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.795198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.795427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.795504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.795763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.795825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.796150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.796208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.796414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.796496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.796687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.796736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.797027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.797078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.797270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.797322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.797501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.797559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 
00:24:53.273 [2024-07-25 10:31:42.797714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.797741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.797898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.797961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.798165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.798215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.798370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.798430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.798628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.798681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.798843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.798894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.799097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.799154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.799352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.799404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.799595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.799661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.799867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.799941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 
00:24:53.273 [2024-07-25 10:31:42.800215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.800287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.800651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.800709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.800906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.800955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.801213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.801271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.801575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.801641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.801926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.801984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.802147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.802199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.802423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.802507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.802695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.802721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 00:24:53.273 [2024-07-25 10:31:42.802978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.273 [2024-07-25 10:31:42.803052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.273 qpair failed and we were unable to recover it. 
00:24:53.273 [2024-07-25 10:31:42.803291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.273 [2024-07-25 10:31:42.803353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.273 qpair failed and we were unable to recover it.
00:24:53.274 [2024-07-25 10:31:42.810295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.274 [2024-07-25 10:31:42.810354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.274 qpair failed and we were unable to recover it.
00:24:53.274 [2024-07-25 10:31:42.812793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.274 [2024-07-25 10:31:42.812850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.274 qpair failed and we were unable to recover it.
00:24:53.276 [2024-07-25 10:31:42.828646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.276 [2024-07-25 10:31:42.828703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.276 qpair failed and we were unable to recover it.
[... the same three-record sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error against addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats for roughly 200 further attempts between 10:31:42.803 and 10:31:42.855, alternating among tqpair=0x7fb568000b90, 0x1bed120, 0x7fb558000b90, and 0x7fb560000b90 ...]
00:24:53.280 [2024-07-25 10:31:42.855777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.280 [2024-07-25 10:31:42.855825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.280 qpair failed and we were unable to recover it.
00:24:53.280 [2024-07-25 10:31:42.856053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.856103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.856271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.856325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.856476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.856542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.856719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.856781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.856971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.856997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.857128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.857198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.857370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.857423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.857546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.857574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.857759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.857807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.857935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.857986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 
00:24:53.280 [2024-07-25 10:31:42.858162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.858222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.858405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.858460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.858608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.858657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.858765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.858791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.858920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.858970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.859104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.859154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.859308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.859358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.859563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.859618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.859778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.859806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 00:24:53.280 [2024-07-25 10:31:42.859920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.280 [2024-07-25 10:31:42.859948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.280 qpair failed and we were unable to recover it. 
00:24:53.280 [2024-07-25 10:31:42.860108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.860162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.860353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.860407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.860680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.860746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.860952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.860992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.861188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.861247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.861450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.861539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.861722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.861782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.861979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.862048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.862243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.862315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.862527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.862575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 
00:24:53.281 [2024-07-25 10:31:42.862725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.862760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.862935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.862993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.863154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.863207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.863343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.863385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.863509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.863537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.863658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.863684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.863834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.863861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.864013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.864044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.864203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.864261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.864416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.864442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 
00:24:53.281 [2024-07-25 10:31:42.864630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.864681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.864862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.864913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.865030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.865059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.865203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.865255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.865414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.865445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.865585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.865660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.281 qpair failed and we were unable to recover it. 00:24:53.281 [2024-07-25 10:31:42.865856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-07-25 10:31:42.865928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.866129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.866201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.866406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.866474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.866692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.866763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 
00:24:53.282 [2024-07-25 10:31:42.866961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.867032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.867227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.867294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.867499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.867573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.867805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.867863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.868131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.868189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.868446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.868523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.868798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.868856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.869090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.869153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.869301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.869343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.869536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.869582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 
00:24:53.282 [2024-07-25 10:31:42.869775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.869828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.870005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.870062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.870224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.870251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.870360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.870387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.870530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.870581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.870690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.870717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.870864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.870891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.871000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.871028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.871198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.871256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.871436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.871501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 
00:24:53.282 [2024-07-25 10:31:42.871669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.871701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.871831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.871881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.872009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.872066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.282 qpair failed and we were unable to recover it. 00:24:53.282 [2024-07-25 10:31:42.872240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.282 [2024-07-25 10:31:42.872293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.872456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.872512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.872647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.872694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.872884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.872934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.873094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.873147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.873320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.873375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.873529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.873561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 
00:24:53.283 [2024-07-25 10:31:42.873667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.873694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.873810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.873837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.874001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.874050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.874169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.874196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.874366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.874421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.874544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.874572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.874733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.874783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.874976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.875026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.875161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.875211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.875332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.875396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 
00:24:53.283 [2024-07-25 10:31:42.875508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.875536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.875703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.875763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.875932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.875993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.876159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.876221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.876383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.876434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.876666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.876721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.876943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.876991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.877112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.877150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.877321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.877374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 00:24:53.283 [2024-07-25 10:31:42.877529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.283 [2024-07-25 10:31:42.877575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.283 qpair failed and we were unable to recover it. 
00:24:53.284 [2024-07-25 10:31:42.877731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.877759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.877903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.877947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.878077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.878126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.878312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.878360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.878528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.878559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.878767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.878842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.879085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.879150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.879324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.879375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.879518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.879562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.879789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.879840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 
00:24:53.284 [2024-07-25 10:31:42.879994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.880046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.880187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.880242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.880400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.880460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.880610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.880659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.880809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.880858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.880994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.881052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.881224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.881279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.881442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.881500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.881724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.881751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.881909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.881936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 
00:24:53.284 [2024-07-25 10:31:42.882104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.882153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.882306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.882332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.882511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.882572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.882711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.882760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.882936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.882991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.883153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.883220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.883392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.883448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.883642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.284 [2024-07-25 10:31:42.883698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.284 qpair failed and we were unable to recover it. 00:24:53.284 [2024-07-25 10:31:42.883807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.285 [2024-07-25 10:31:42.883834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.285 qpair failed and we were unable to recover it. 00:24:53.285 [2024-07-25 10:31:42.884028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.285 [2024-07-25 10:31:42.884073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.285 qpair failed and we were unable to recover it. 
00:24:53.285 [2024-07-25 10:31:42.884244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.285 [2024-07-25 10:31:42.884298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.285 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it.") repeats back-to-back from 10:31:42.884461 through 10:31:42.929600, always against addr=10.0.0.2, port=4420, cycling through tqpair values 0x7fb558000b90, 0x7fb560000b90, 0x1bed120, and 0x7fb568000b90 ...]
00:24:53.292 [2024-07-25 10:31:42.929810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.929863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.930043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.930098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.930256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.930314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.930535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.930563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.930709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.930759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.930918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.930979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.931170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.931222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.931375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.931402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.931604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.931654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.931819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.931875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 
00:24:53.292 [2024-07-25 10:31:42.932039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.932096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.932203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.932231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.932473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.932551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.932794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.932853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.933127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.933188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.933400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.933470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.933734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.933792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.934013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.934054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.292 [2024-07-25 10:31:42.934357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.292 [2024-07-25 10:31:42.934415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.292 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.934622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.934684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 
00:24:53.293 [2024-07-25 10:31:42.934957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.935015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.935326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.935383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.935610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.935671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.935876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.935902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.936178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.936249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.936560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.936628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.936824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.936897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.937089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.937160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.937351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.937398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.937597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.937666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 
00:24:53.293 [2024-07-25 10:31:42.937862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.937931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.938212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.938272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.938555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.938590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.938805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.938864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.939108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.939167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.939499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.939567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.939785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.939846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.940198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.940256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.940459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.940541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.940711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.940765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 
00:24:53.293 [2024-07-25 10:31:42.941124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.941182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.941504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.941573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.941809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.941868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.942081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.942150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.942428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.942495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.942764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.942822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.293 qpair failed and we were unable to recover it. 00:24:53.293 [2024-07-25 10:31:42.943004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.293 [2024-07-25 10:31:42.943062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.943341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.943399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.943625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.943686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.943993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.944051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 
00:24:53.294 [2024-07-25 10:31:42.944382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.944453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.944684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.944711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.944927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.944996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.945226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.945274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.945384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.945412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.945618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.945669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.945824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.945885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.946136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.946184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.946378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.946428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.946544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.946573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 
00:24:53.294 [2024-07-25 10:31:42.946778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.946832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.947088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.947136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.947323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.947373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.947594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.947641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.947868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.947918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.948168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.948227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.948365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.948391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.948548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.948574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.948725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.948774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.948956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.948982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 
00:24:53.294 [2024-07-25 10:31:42.949114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.949171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.949374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.949421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.294 [2024-07-25 10:31:42.949535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.294 [2024-07-25 10:31:42.949563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.294 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.949669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.949696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.949839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.949865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.950005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.950031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.950159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.950185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.950295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.950323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.950494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.950521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.950681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.950733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 
00:24:53.295 [2024-07-25 10:31:42.950916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.950966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.951104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.951163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.951270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.951297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.951541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.951569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.951746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.951772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.951932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.951958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.952124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.952180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.952348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.952398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.952559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.952614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.952806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.952879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 
00:24:53.295 [2024-07-25 10:31:42.953198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.953256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.953451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.953530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.953752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.953825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.954034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.954088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.954277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.954303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.954457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.954491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.954646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.954692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.954933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.954986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.955169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.295 [2024-07-25 10:31:42.955223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.295 qpair failed and we were unable to recover it. 00:24:53.295 [2024-07-25 10:31:42.955359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.955408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 
00:24:53.296 [2024-07-25 10:31:42.955580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.955630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.955812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.955864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.955963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.955989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.956158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.956184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.956385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.956432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.956640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.956691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.956869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.956926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.957055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.957114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.957289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.957340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.957544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.957596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 
00:24:53.296 [2024-07-25 10:31:42.957817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.957870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.958067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.958094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.958204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.958231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.958402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.958453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.958677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.958728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.958901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.958957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.959167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.959235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.959431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.959513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.959716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.959757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.959933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.960007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 
00:24:53.296 [2024-07-25 10:31:42.960318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.960381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.960625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.960653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.960966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.961025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.961217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.961288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.961491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.961531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.961707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.961753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.961926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.961994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.962293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.296 [2024-07-25 10:31:42.962319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.296 qpair failed and we were unable to recover it. 00:24:53.296 [2024-07-25 10:31:42.962548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.962602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.962775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.962844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 
00:24:53.297 [2024-07-25 10:31:42.963171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.963232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.963533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.963596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.963867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.963925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.964129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.964188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.964467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.964538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.964716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.964778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.965120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.965179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.965451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.965542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.965847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.965909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.966194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.966255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 
00:24:53.297 [2024-07-25 10:31:42.966576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.966646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.966931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.966989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.967182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.967227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.967418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.967464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.967694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.967767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.968088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.968145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.968410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.968469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.968699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.968760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.968946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.969014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.969290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.969370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 
00:24:53.297 [2024-07-25 10:31:42.969620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.969680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.969878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.969950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.970162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.970222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.970408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.970475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.970825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.970883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.971102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.971165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.971361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.971408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.971636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.971709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.971913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.971938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 00:24:53.297 [2024-07-25 10:31:42.972155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.297 [2024-07-25 10:31:42.972223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.297 qpair failed and we were unable to recover it. 
00:24:53.297 [2024-07-25 10:31:42.972532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.972559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.972844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.972915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.973113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.973172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.973373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.973415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.973629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.973687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.973888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.973959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.974245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.974270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.974477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.974536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.974704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.974774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.975062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.975087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 
00:24:53.298 [2024-07-25 10:31:42.975305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.975363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.975555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.975624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.975818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.975885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.976229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.976287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.976572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.976632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.976951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.977020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.977205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.977273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.977595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.977666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.977919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.977979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.978298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.978355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 
00:24:53.298 [2024-07-25 10:31:42.978558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.978634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.978972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.979044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.979363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.979420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.979629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.979699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.979984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.980042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.980319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.980389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.980654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.980727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.980918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.980964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.981107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.981158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.981393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.981459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 
00:24:53.298 [2024-07-25 10:31:42.981758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.981817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.982025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.982101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.982295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.982366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.982703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.982761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.983040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.983093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.983288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.983315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.983545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.983590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.298 qpair failed and we were unable to recover it. 00:24:53.298 [2024-07-25 10:31:42.983774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.298 [2024-07-25 10:31:42.983827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.984063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.984112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.984315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.984374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 
00:24:53.299 [2024-07-25 10:31:42.984581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.984635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.984786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.984849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.985020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.985071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.985258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.985307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.985516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.985568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.985670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.985696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.985907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.985955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.986166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.986215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.986367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.986415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.986537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.986565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 
00:24:53.299 [2024-07-25 10:31:42.986811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.986864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.986972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.986999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.987198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.987251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.987410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.987472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.987656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.987708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.987895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.987955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.988238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.988287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.988500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.988553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.988745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.988794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.988923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.988976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 
00:24:53.299 [2024-07-25 10:31:42.989079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.989105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.989238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.989288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.989470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.989527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.989732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.989785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.989973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.989999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.990224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.990275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.990404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.990456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.990713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.990764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.990965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.991011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.991236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.991289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 
00:24:53.299 [2024-07-25 10:31:42.991500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.991552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.991763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.991810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.991979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.992033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.992259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.992309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.992464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.299 [2024-07-25 10:31:42.992496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.299 qpair failed and we were unable to recover it. 00:24:53.299 [2024-07-25 10:31:42.992730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.992783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.992944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.992990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.993208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.993255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.993439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.993499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.993708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.993758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 
00:24:53.300 [2024-07-25 10:31:42.993925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.993959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.994170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.994222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.994528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.994556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.994716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.994744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.994991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.995042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.995267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.995320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.995532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.995586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.995760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.995812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.996005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.996053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.996258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.996310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 
00:24:53.300 [2024-07-25 10:31:42.996507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.996536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.996748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.996799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.997002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.997052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.997242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.997295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.997506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.997555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.997770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.997823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.997924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.997951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.998140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.998187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.998367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.998418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.998647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.998694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 
00:24:53.300 [2024-07-25 10:31:42.998802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.998829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.999060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.999110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.999292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.999340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.999543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.999590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.999779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:42.999805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:42.999990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:43.000040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:43.000144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:43.000170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:43.000364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:43.000415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:43.000569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:43.000620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:43.000836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:43.000888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 
00:24:53.300 [2024-07-25 10:31:43.001096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:43.001148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:43.001256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:43.001284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:43.001442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.300 [2024-07-25 10:31:43.001497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.300 qpair failed and we were unable to recover it. 00:24:53.300 [2024-07-25 10:31:43.001665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.001712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.001816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.001842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.001999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.002049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.002207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.002258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.002503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.002555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.002794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.002841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.002980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.003061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 
00:24:53.301 [2024-07-25 10:31:43.003233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.003291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.003477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.003538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.003645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.003671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.003799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.003856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.004085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.004138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.004308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.004354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.004545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.004571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.004755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.004805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.005033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.005084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.005275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.005325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 
00:24:53.301 [2024-07-25 10:31:43.005455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.005516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.005745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.005797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.005993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.006041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.006221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.006274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.006469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.006531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.006711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.006763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.006919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.006962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.007121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.007148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.007327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.007375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.007599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.007651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 
00:24:53.301 [2024-07-25 10:31:43.007790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.007873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.007983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.008011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.008214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.008265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.008444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.008501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.008667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.008717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.008868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.008919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.009093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.009155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.009318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.009381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.009606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.009656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.301 [2024-07-25 10:31:43.009813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.009840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 
00:24:53.301 [2024-07-25 10:31:43.010041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.301 [2024-07-25 10:31:43.010089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.301 qpair failed and we were unable to recover it. 00:24:53.302 [2024-07-25 10:31:43.010277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.302 [2024-07-25 10:31:43.010329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.302 qpair failed and we were unable to recover it. 00:24:53.302 [2024-07-25 10:31:43.010533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.302 [2024-07-25 10:31:43.010562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.302 qpair failed and we were unable to recover it. 00:24:53.302 [2024-07-25 10:31:43.010697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.302 [2024-07-25 10:31:43.010753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.302 qpair failed and we were unable to recover it. 00:24:53.302 [2024-07-25 10:31:43.010858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.302 [2024-07-25 10:31:43.010884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.302 qpair failed and we were unable to recover it. 00:24:53.302 [2024-07-25 10:31:43.011034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.302 [2024-07-25 10:31:43.011086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.302 qpair failed and we were unable to recover it. 00:24:53.302 [2024-07-25 10:31:43.011300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.302 [2024-07-25 10:31:43.011354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.302 qpair failed and we were unable to recover it. 00:24:53.302 [2024-07-25 10:31:43.011546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.302 [2024-07-25 10:31:43.011573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.302 qpair failed and we were unable to recover it. 00:24:53.302 [2024-07-25 10:31:43.011775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.302 [2024-07-25 10:31:43.011828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.302 qpair failed and we were unable to recover it. 00:24:53.302 [2024-07-25 10:31:43.011998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.302 [2024-07-25 10:31:43.012054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.302 qpair failed and we were unable to recover it. 
00:24:53.302 [2024-07-25 10:31:43.012294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.012346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.012593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.012641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.012891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.012941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.013112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.013139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.013295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.013344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.013532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.013578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.013712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.013764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.013961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.014013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.014200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.014254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.014441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.014498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.014658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.014685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.014795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.014822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.014989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.015042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.015258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.015315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.015506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.015558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.015754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.015805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.015994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.016047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.016154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.016182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.016393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.016442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.016634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.016690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.016893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.016945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.017078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.302 [2024-07-25 10:31:43.017131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.302 qpair failed and we were unable to recover it.
00:24:53.302 [2024-07-25 10:31:43.017366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.017415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.017597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.017650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.017783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.017834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.018042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.018093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.018273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.018327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.018434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.018466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.018604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.018662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.018845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.018898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.019113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.019164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.019385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.019435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.019606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.019633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.019805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.019860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.020024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.020052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.020277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.020328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.020584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.020640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.020837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.020888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.021068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.021117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.021244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.021304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.021531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.021559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.021745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.021801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.022011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.022067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.022224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.022253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.022387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.022441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.022593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.022643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.022808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.022856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.023083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.023133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.023384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.023441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.303 qpair failed and we were unable to recover it.
00:24:53.303 [2024-07-25 10:31:43.023607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.303 [2024-07-25 10:31:43.023664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.023939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.023988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.024118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.024180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.024353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.024407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.024587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.024640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.024847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.024918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.025157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.025220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.025438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.025496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.025653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.025703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.025871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.025924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.026089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.026146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.593 qpair failed and we were unable to recover it.
00:24:53.593 [2024-07-25 10:31:43.026435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.593 [2024-07-25 10:31:43.026490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.026598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.026624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.026736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.026762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.026966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.027021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.027234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.027300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.027484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.027533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.027816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.027874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.028171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.028230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.028449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.028551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.028778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.028837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.029031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.029102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.029392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.029450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.029751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.029810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.030138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.030200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.030438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.030529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.030735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.030796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.031080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.031137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.031454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.031536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.031695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.031758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.031995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.032058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.032238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.032287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.032447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.032510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.594 qpair failed and we were unable to recover it.
00:24:53.594 [2024-07-25 10:31:43.032665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.594 [2024-07-25 10:31:43.032711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.032895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.032948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.033054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.033082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.033277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.033327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.033504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.033560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.033757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.033809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.033968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.033994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.034099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.034126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.034235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.034261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.034441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.034542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.034759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.034789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.034954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.034981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.035200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.035252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.035444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.035504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.035609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.035637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.035819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.035868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.036054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.036104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.036325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.036376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.036587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.036641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.036868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.036916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.037078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.037138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.037257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.037284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.037465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.037524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.037675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.037724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.037963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.038011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.038209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.038261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.595 qpair failed and we were unable to recover it.
00:24:53.595 [2024-07-25 10:31:43.038469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.595 [2024-07-25 10:31:43.038525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.038724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.038779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.038951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.039004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.039180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.039230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.039388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.039414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.039542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.039603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.039760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.039786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.039892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.039919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.040105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.040160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.040331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.040385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.040566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.040615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.040889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.040937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.041045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.041074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.041292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.041349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.041548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.041587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.041766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.041815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.041992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.042045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.042146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.042171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.042360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.042408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.042632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.042687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.042870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.042920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.043089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.043143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.043335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.043386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.043585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.043635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.043841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.043899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.044073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.596 [2024-07-25 10:31:43.044125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.596 qpair failed and we were unable to recover it.
00:24:53.596 [2024-07-25 10:31:43.044295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.044351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.044546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.044573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.044783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.044831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.044995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.045052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.045196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.045249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.045440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.045498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.045698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.045749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.045857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.045885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.046070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.046117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.046274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.046300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.046527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.046554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.046712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.046772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.046877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.046902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.047069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.047124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.047340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.047393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.047582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.047632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.047771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.047823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.048012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.048063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.048227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.048284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.048455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.048512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.048668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.048729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.048873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.048915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.049085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.049138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.049320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.049370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.049574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.049640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.049814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.049868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.597 qpair failed and we were unable to recover it.
00:24:53.597 [2024-07-25 10:31:43.050077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.597 [2024-07-25 10:31:43.050149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.050329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.598 [2024-07-25 10:31:43.050357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.050538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.598 [2024-07-25 10:31:43.050566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.050773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.598 [2024-07-25 10:31:43.050822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.050999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.598 [2024-07-25 10:31:43.051050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.051239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.598 [2024-07-25 10:31:43.051288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.051418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.598 [2024-07-25 10:31:43.051487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.051709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.598 [2024-07-25 10:31:43.051752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.051925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.598 [2024-07-25 10:31:43.051979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.052160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.598 [2024-07-25 10:31:43.052216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.598 qpair failed and we were unable to recover it.
00:24:53.598 [2024-07-25 10:31:43.052384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.052438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.052613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.052672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.052834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.052883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.053057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.053109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.053294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.053348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.053554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.053582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.053795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.053838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.054013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.054073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.054222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.054275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.054467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.054528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 
00:24:53.598 [2024-07-25 10:31:43.054666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.054719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.054901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.054928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.055145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.055198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.055389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.055445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.055634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.055700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.055898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.055942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.056078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.056161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.056332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.056381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.056526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.598 [2024-07-25 10:31:43.056593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.598 qpair failed and we were unable to recover it. 00:24:53.598 [2024-07-25 10:31:43.056770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.056825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 
00:24:53.599 [2024-07-25 10:31:43.056999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.057058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.057215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.057276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.057379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.057405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.057524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.057551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.057738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.057792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.057985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.058040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.058178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.058230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.058392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.058442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.058593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.058644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.058806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.058863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 
00:24:53.599 [2024-07-25 10:31:43.059032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.059085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.059257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.059310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.059423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.059449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.059600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.059683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.059889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.059945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.060077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.060127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.060283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.060343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.060475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.060553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.060765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.060815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.060961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.060988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 
00:24:53.599 [2024-07-25 10:31:43.061185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.061236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.061404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.061459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.061606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.061660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.061831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.061883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.062039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.062100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.062270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.062322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.599 [2024-07-25 10:31:43.062435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.599 [2024-07-25 10:31:43.062462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.599 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.062670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.062722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.062852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.062908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.063013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.063038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 
00:24:53.600 [2024-07-25 10:31:43.063197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.063223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.063393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.063448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.063654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.063709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.063872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.063930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.064036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.064062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.064230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.064280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.064461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.064517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.064668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.064721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.064882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.064950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 00:24:53.600 [2024-07-25 10:31:43.065056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.600 [2024-07-25 10:31:43.065083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.600 qpair failed and we were unable to recover it. 
00:24:53.601 [2024-07-25 10:31:43.065239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.065290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.065447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.065475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.065607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.065665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.065800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.065883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.066054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.066109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.066211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.066236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.066403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.066460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.066592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.066652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.066827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.066881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.067034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.067082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 
00:24:53.601 [2024-07-25 10:31:43.067246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.067307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.067468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.067509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.067719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.067774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.067926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.067991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.068212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.068265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.068429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.068498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.068703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.068754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.068919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.068972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.069131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.069189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.069423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.069478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 
00:24:53.601 [2024-07-25 10:31:43.069671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.069725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.069890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.069946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.601 qpair failed and we were unable to recover it. 00:24:53.601 [2024-07-25 10:31:43.070088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.601 [2024-07-25 10:31:43.070141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.070271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.070328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.070497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.070553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.070767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.070815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.071025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.071075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.071179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.071205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.071417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.071471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.071638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.071692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 
00:24:53.602 [2024-07-25 10:31:43.071853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.071895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.072026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.072080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.072212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.072270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.072377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.072405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.072545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.072606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.072771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.072829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.073067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.073117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.073298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.073349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.073564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.073618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.073724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.073749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 
00:24:53.602 [2024-07-25 10:31:43.073983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.074026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.074159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.074208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.074313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.074340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.074541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.074568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.074737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.074791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.075030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.075077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.075227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.075292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.075396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.075422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.075607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.075658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 00:24:53.602 [2024-07-25 10:31:43.075857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.602 [2024-07-25 10:31:43.075907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.602 qpair failed and we were unable to recover it. 
00:24:53.602 [2024-07-25 10:31:43.076124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.076172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.076384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.076434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.076566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.076593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.076782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.076831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.077028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.077055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.077212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.077239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.077409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.077462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.077634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.077689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.077914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.077964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.078147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.078197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 
00:24:53.603 [2024-07-25 10:31:43.078378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.078429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.078561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.078612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.078777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.078802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.078953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.078997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.079157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.079185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.079350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.079405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.079522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.079550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.079660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.079687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.079797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.079823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.080052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.080104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 
00:24:53.603 [2024-07-25 10:31:43.080217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.080245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.080402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.603 [2024-07-25 10:31:43.080468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.603 qpair failed and we were unable to recover it. 00:24:53.603 [2024-07-25 10:31:43.080693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.080759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.080934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.080986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.081121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.081174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.081392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.081445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.081645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.081696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.081883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.081932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.082099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.082158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.082334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.082388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 
00:24:53.604 [2024-07-25 10:31:43.082577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.082628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.082815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.082863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.083005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.083086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.083231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.083279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.083410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.083501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.083682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.083731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.083920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.083967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.084235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.084286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.084437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.084491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 00:24:53.604 [2024-07-25 10:31:43.084634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.604 [2024-07-25 10:31:43.084682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.604 qpair failed and we were unable to recover it. 
00:24:53.604 [2024-07-25 10:31:43.084843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.604 [2024-07-25 10:31:43.084905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.604 qpair failed and we were unable to recover it.
00:24:53.604 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it.) repeats continuously from 10:31:43.084 through 10:31:43.130 (log time 00:24:53.604-00:24:53.613), first for tqpair=0x7fb560000b90 and then interleaved for tqpair=0x7fb568000b90 and tqpair=0x7fb558000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:24:53.613 [2024-07-25 10:31:43.130251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.130311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.130538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.130592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.130848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.130904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.131098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.131155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.131351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.131399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.131523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.131551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.131724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.131750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.131884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.131937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.132136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.132188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.132290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.132316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 
00:24:53.613 [2024-07-25 10:31:43.132458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.132489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.132625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.132677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.132847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.132900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.133063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.133118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.133263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.133314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.133467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.133524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.133738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.133797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.133971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.134021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.134170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.134223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.134429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.134493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 
00:24:53.613 [2024-07-25 10:31:43.134653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.134680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.134812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.134857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.135000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.613 [2024-07-25 10:31:43.135056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.613 qpair failed and we were unable to recover it. 00:24:53.613 [2024-07-25 10:31:43.135164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.135191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.135348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.135377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.135578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.135639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.135899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.135956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.136146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.136210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.136391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.136452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.136645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.136699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 
00:24:53.614 [2024-07-25 10:31:43.137015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.137068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.137347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.137402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.137629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.137687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.137966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.138021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.138323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.138379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.138680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.138736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.138940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.138994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.139185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.139229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.139434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.139460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.139678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.139746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 
00:24:53.614 [2024-07-25 10:31:43.139951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.140003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.614 [2024-07-25 10:31:43.140166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.614 [2024-07-25 10:31:43.140225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.614 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.140370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.140420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.140558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.140609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.140752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.140794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.140963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.141021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.141267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.141294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.141512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.141581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.141753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.141803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.141976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.142006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 
00:24:53.615 [2024-07-25 10:31:43.142199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.142249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.142420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.142473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.142711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.142760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.142977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.143030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.143170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.143218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.143410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.143463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.143602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.143630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.143795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.143852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.144006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.144039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.144276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.144330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 
00:24:53.615 [2024-07-25 10:31:43.144489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.144517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.144690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.144745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.144900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.144956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.145156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.145205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.145367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.145419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.145627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.145676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.615 [2024-07-25 10:31:43.145836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.615 [2024-07-25 10:31:43.145897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.615 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.146058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.146116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.146335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.146362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.146516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.146565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 
00:24:53.616 [2024-07-25 10:31:43.146675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.146704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.146844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.146895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.147091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.147141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.147343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.147393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.147509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.147542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.147720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.147775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.147943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.147994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.148142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.148168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.148289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.148316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.148418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.148444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 
00:24:53.616 [2024-07-25 10:31:43.148554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.148582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.148685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.148712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.148854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.148901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.149101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.149152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.149290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.149371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.149544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.149586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.149854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.149905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.150098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.150162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.150360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.150423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 00:24:53.616 [2024-07-25 10:31:43.150627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.150688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.616 qpair failed and we were unable to recover it. 
00:24:53.616 [2024-07-25 10:31:43.150949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.616 [2024-07-25 10:31:43.151011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.151208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.151234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.151448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.151529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.151749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.151798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.151985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.152037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.152223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.152286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.152474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.152530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.152704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.152757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.152900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.152958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.153077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.153103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 
00:24:53.617 [2024-07-25 10:31:43.153267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.153321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.153467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.153525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.153640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.153667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.153832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.153885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.154065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.154091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.154288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.154336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.154515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.154564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.154670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.154697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.154862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.154915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.155048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.155101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 
00:24:53.617 [2024-07-25 10:31:43.155276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.155326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.155473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.155542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.155689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.155744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.155916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.155969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.156128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.156187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.156347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.156395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.617 [2024-07-25 10:31:43.156555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.617 [2024-07-25 10:31:43.156582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.617 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.156754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.156806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.156989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.157033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.157235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.157298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 
00:24:53.618 [2024-07-25 10:31:43.157515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.157543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.157740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.157790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.158031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.158081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.158279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.158344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.158520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.158550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.158736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.158794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.158974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.159040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.159239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.159293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.159518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.159564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.159755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.159808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 
00:24:53.618 [2024-07-25 10:31:43.160026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.160077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.160301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.160351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.160562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.160620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.160749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.160804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.160951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.160994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.161127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.161181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.161356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.161409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.161556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.161603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.161767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.161823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.161958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.162011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 
00:24:53.618 [2024-07-25 10:31:43.162150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.162202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.618 [2024-07-25 10:31:43.162380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.618 [2024-07-25 10:31:43.162407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.618 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.162508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.162535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.162687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.162714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.162877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.162904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.163078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.163128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.163306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.163362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.163498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.163553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.163712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.163759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.163912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.163973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 
00:24:53.619 [2024-07-25 10:31:43.164186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.164263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.164544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.164616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.164865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.164924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.165204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.165266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.165453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.165485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.165711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.165781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.166125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.166183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.166404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.166466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.166754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.166815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.166992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.167055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 
00:24:53.619 [2024-07-25 10:31:43.167291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.167349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.167529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.167585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.167747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.167791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.167961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.167987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.168257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.168329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.168574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.168648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.168840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.168866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.169114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.169172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.619 qpair failed and we were unable to recover it. 00:24:53.619 [2024-07-25 10:31:43.169442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.619 [2024-07-25 10:31:43.169513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.169736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.169807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 
00:24:53.620 [2024-07-25 10:31:43.169973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.170031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.170182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.170228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.170386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.170413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.170549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.170610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.170783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.170836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.171102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.171155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.171383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.171433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.171670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.171722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.171958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.172010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.172171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.172198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 
00:24:53.620 [2024-07-25 10:31:43.172351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.172378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.172567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.172613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.172792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.172844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.172956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.172986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.173273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.173336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.173576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.173637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.173797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.173855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.174165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.174229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.174540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.174566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.174763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.174822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 
00:24:53.620 [2024-07-25 10:31:43.175144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.175214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.175440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.175510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.175670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.175725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.175872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.175914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.176101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.176149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.620 qpair failed and we were unable to recover it. 00:24:53.620 [2024-07-25 10:31:43.176310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.620 [2024-07-25 10:31:43.176363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.176599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.176657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.176840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.176897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.177025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.177076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.177225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.177288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 
00:24:53.621 [2024-07-25 10:31:43.177504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.177559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.177740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.177792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.177943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.177994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.178200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.178249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.178386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.178438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.178601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.178627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.178766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.178819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.179046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.179095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.179212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.179276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.179508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.179558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 
00:24:53.621 [2024-07-25 10:31:43.179773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.179820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.180056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.180114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.180276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.180331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.180493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.621 [2024-07-25 10:31:43.180576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.621 qpair failed and we were unable to recover it. 00:24:53.621 [2024-07-25 10:31:43.180794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.180853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.181128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.181190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.181508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.181553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.181801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.181860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.182183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.182252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.182547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.182614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 
00:24:53.622 [2024-07-25 10:31:43.182748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.182797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.182974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.183023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.183151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.183210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.183365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.183392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.183614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.183663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.183947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.183997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.184167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.184222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.184370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.184433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.184568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.184622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.184804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.184829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 
00:24:53.622 [2024-07-25 10:31:43.184973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.185019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.185124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.185151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.185306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.185332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.185439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.185465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.185658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.185705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.185856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.185908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.186068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.186121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.186348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.186374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.186648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.186698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.186867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.186924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 
00:24:53.622 [2024-07-25 10:31:43.187080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.187131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.622 [2024-07-25 10:31:43.187329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.622 [2024-07-25 10:31:43.187381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.622 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.187651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.187703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.187906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.187952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.188144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.188194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.188426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.188475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.188678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.188734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.188908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.188963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.189151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.189178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.189398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.189448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 
00:24:53.623 [2024-07-25 10:31:43.189637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.189687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.189821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.189875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.190078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.190127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.190348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.190396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.190550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.190613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.190723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.190749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.190855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.190882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.191082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.191134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.191336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.191401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.191704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.191730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 
00:24:53.623 [2024-07-25 10:31:43.191953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.191979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.192265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.192324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.192579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.192641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.192842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.192917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.193176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.193231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.193384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.193446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.193559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.193585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.193693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.193720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.193942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.193996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.194145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.194196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 
00:24:53.623 [2024-07-25 10:31:43.194408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.623 [2024-07-25 10:31:43.194472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.623 qpair failed and we were unable to recover it. 00:24:53.623 [2024-07-25 10:31:43.194776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.194836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.195120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.195177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.195544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.195623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.195841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.195900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.196277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.196334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.196563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.196606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.196845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.196904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.197068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.197117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.197325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.197351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 
00:24:53.624 [2024-07-25 10:31:43.197558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.197605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.197887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.197914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.198201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.198227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.198530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.198556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.198732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.198795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.198961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.198989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.199168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.199224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.199398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.199452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.199694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.199744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.199947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.199997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 
00:24:53.624 [2024-07-25 10:31:43.200219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.200268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.200440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.200498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.200608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.200635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.200821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.200847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.201078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.201125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.201282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.201342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.201631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.201679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.201781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.201806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.201996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.202042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.202250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.202303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 
00:24:53.624 [2024-07-25 10:31:43.202430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.202493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.202613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.202639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.202807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.624 [2024-07-25 10:31:43.202860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.624 qpair failed and we were unable to recover it. 00:24:53.624 [2024-07-25 10:31:43.203053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.203118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.203317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.203358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.203533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.203560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.203867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.203945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.204231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.204290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.204490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.204531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.204749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.204818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 
00:24:53.625 [2024-07-25 10:31:43.205097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.205155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.205463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.205493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.205653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.205681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.205819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.205872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.205980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.206006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.206217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.206264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.206408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.206462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.206753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.206801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.206957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.207017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 00:24:53.625 [2024-07-25 10:31:43.207216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.625 [2024-07-25 10:31:43.207265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.625 qpair failed and we were unable to recover it. 
00:24:53.625 [2024-07-25 10:31:43.207461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.207520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.207711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.207737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.207868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.207923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.208167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.208220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.208378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.208421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.208725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.208778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.208974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.209024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.209249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.209306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.209528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.209555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.209722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.209747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.210044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.210097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.210219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.210279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.210429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.210498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.210726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.210778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.210885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.210911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.211019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.211047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.211234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.211264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.211551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.211596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.211889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.211947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.212112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.212169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.212497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.212557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.212772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.212813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.213054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.213112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.213488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.213534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.213742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.213800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.214114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.214172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.214497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.214558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.214916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.214974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.215236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.215305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.215519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.215592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.215878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.215936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.216222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.216274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.216471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.216536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.216743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.216769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.216948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.216999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.217152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.217217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.217399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.217453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.217720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.217769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.217978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.218027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.218196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.218252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.218404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.218460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.218793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.218852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.219050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.219123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.219427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.219453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.219690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.219738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.625 qpair failed and we were unable to recover it.
00:24:53.625 [2024-07-25 10:31:43.219895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.625 [2024-07-25 10:31:43.219923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.220126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.220176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.220401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.220455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.220630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.220664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.220851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.220903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.221142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.221168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.221432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.221488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.221649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.221709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.221868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.221893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.222070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.222126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.222302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.222353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.222518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.222573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.222780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.222821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.223063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.223124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.223323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.223396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.223722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.223780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.223994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.224068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.224398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.224459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.224690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.224749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.225050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.225108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.225381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.225439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.225703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.225780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.226037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.226063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.226338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.226363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.226716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.226776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.227092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.227150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.227310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.227371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.227746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.227816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.228009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.228078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.228280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.228360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.228555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.228614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.228778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.228830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.229160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.229219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.229504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.229548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.229713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.229772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.229961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.230029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.230234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.230306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.230541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.230613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.230899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.230957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.231144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.231213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.231509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.231557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.231822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.231891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.232084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.232134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.232451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.232528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.232793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.232822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.232963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.233013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.233253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.233301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.233452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.233478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.233751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.233802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.233968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.626 [2024-07-25 10:31:43.234023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.626 qpair failed and we were unable to recover it.
00:24:53.626 [2024-07-25 10:31:43.234202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.234256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.234394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.234421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.234568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.234620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.234724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.234750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.234963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.235011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.235299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.235346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.235528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.235593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.235955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.236014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.236306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.236366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.236596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.236659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.236986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.237048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.237303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.237370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.237671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.237726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.237898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.237953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.238113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.238140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.238394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.238444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.238587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.238614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.238828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.238878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.239141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.239192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.239456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.239511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.239667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.239712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.239882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.239936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.240036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.240062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.240179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.240204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.240340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.240368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.240469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.240504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.240637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.240662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.240790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.240816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.240922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.240948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.241218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.241268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.241437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.241498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.241712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.241762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.241949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.241977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.242218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.242283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.242541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.242588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.242816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.242877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.243192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.243250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.243528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.243554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.243812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.243870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.244195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.244256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.244448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.244533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.244737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.244807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.245123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.245149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.245349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.245396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.245577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.245637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.245838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.245897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.246101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.246146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.246404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.246462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.246704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.246763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.247055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.247113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.247434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.247506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.247716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.247789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.248081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.248142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.248370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.248432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.248717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.248770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.248982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.249034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.249169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.249217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.249427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.249478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.249600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.249626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.249755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.249818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.627 qpair failed and we were unable to recover it.
00:24:53.627 [2024-07-25 10:31:43.249976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.627 [2024-07-25 10:31:43.250030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.250292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.250345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.250520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.250570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.250775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.250828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.251059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.251086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.251296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.251351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.251516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.251560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.251743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.251792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.251999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.252048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.252255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.252302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.252473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.252539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.252802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.252852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.253042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.253093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.253322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.253380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.253540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.628 [2024-07-25 10:31:43.253567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.628 qpair failed and we were unable to recover it.
00:24:53.628 [2024-07-25 10:31:43.253670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.253696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.253921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.253970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.254147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.254202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.254433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.254485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.254683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.254730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.254911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.254964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.255123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.255150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.255362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.255408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.255609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.255663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.255828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.255884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 
00:24:53.628 [2024-07-25 10:31:43.256056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.256103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.256234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.256287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.256509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.256588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.256796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.256872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.257040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.257094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.257415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.257474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.257720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.257783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.257948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.258002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.258271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.258347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.258712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.258772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 
00:24:53.628 [2024-07-25 10:31:43.259065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.259125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.259447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.259527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.259694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.259721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.260005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.260032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.260277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.260331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.260530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.260579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.260743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.260770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.261045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.261095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.261271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.261326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.261514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.261562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 
00:24:53.628 [2024-07-25 10:31:43.261723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.261751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.261982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.262031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.262204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.262256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.262455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.262513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.262711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.262765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.262868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.262894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.263084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.263131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.263384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.263431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.263670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.263716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.263879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.263905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 
00:24:53.628 [2024-07-25 10:31:43.264126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.264181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.264330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.264382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.264493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.264520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.264712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.264759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.264946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.264996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.628 qpair failed and we were unable to recover it. 00:24:53.628 [2024-07-25 10:31:43.265257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.628 [2024-07-25 10:31:43.265309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.265584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.265649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.265831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.265857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.266051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.266100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.266273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.266326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 
00:24:53.629 [2024-07-25 10:31:43.266544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.266591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.266792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.266844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.266950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.266984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.267178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.267231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.267336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.267362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.267541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.267589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.267737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.267778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.268005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.268031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.268218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.268266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.268426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.268472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 
00:24:53.629 [2024-07-25 10:31:43.268659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.268711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.268820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.268847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.269076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.269123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.269383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.269448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.269664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.269725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.269933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.269985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.270205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.270256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.270499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.270541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.270685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.270736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.270991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.271041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 
00:24:53.629 [2024-07-25 10:31:43.271266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.271316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.271446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.271506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.271660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.271708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.271814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.271841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.272032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.272083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.272316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.272367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.272546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.272599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.272953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.273012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.273213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.273284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.273634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.273704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 
00:24:53.629 [2024-07-25 10:31:43.273901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.273968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.274287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.274345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.274531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.274584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.274876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.274935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.275260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.275317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.275532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.275559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.275838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.275896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.276093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.276153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.276494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.276551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.276808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.276865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 
00:24:53.629 [2024-07-25 10:31:43.277111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.277172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.277371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.277442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.277756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.277814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.278024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.278075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.278243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.278270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.278469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.278522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.278678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.278704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.278946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.278998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.279230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.279279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 00:24:53.629 [2024-07-25 10:31:43.279380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.629 [2024-07-25 10:31:43.279405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.629 qpair failed and we were unable to recover it. 
00:24:53.629 [2024-07-25 10:31:43.279530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.279589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.279742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.279769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.279943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.279995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.280212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.280261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.280505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.280556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.280736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.280788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.280956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.281015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.281176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.281202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.281395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.281447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.281607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.281669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 
00:24:53.630 [2024-07-25 10:31:43.281877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.281924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.282056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.282104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.282291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.282342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.282516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.282568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.282747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.282776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.283054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.283116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.283446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.283528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.283727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.283787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.283993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.284043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.284316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.284375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 
00:24:53.630 [2024-07-25 10:31:43.284612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.284639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.284833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.284888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.285076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.285124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.285289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.285344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.285519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.285572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.285772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.285819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.285993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.286048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.286210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.286269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.286463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.286519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.286779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.286838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 
00:24:53.630 [2024-07-25 10:31:43.287066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.287121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.287286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.287342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.287516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.287569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.287799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.287863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.288131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.288190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.288379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.288449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.288664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.288735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.288945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.289003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.289201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.289275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.289462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.289555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 
00:24:53.630 [2024-07-25 10:31:43.289729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.289757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.289902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.289931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.290074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.290101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.290274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.290329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.290492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.290520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.290729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.290783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.290980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.291038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.291207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.291261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.291473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.291535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 00:24:53.630 [2024-07-25 10:31:43.291693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.630 [2024-07-25 10:31:43.291741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.630 qpair failed and we were unable to recover it. 
00:24:53.630 [2024-07-25 10:31:43.291958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.630 [2024-07-25 10:31:43.292007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.630 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for tqpair=0x1bed120 from 10:31:43.292244 through 10:31:43.299558 ...]
00:24:53.631 [2024-07-25 10:31:43.299766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.631 [2024-07-25 10:31:43.299820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.631 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7fb558000b90 from 10:31:43.299982 through 10:31:43.309255 ...]
00:24:53.632 [2024-07-25 10:31:43.309455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.632 [2024-07-25 10:31:43.309513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.632 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7fb560000b90 from 10:31:43.309622 through 10:31:43.317233 ...]
00:24:53.633 [2024-07-25 10:31:43.317412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.633 [2024-07-25 10:31:43.317511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.633 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7fb568000b90 from 10:31:43.317719 through 10:31:43.318932 ...]
00:24:53.633 [2024-07-25 10:31:43.319138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.633 [2024-07-25 10:31:43.319205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.633 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7fb560000b90 from 10:31:43.319388 through 10:31:43.323232 ...]
00:24:53.633 [2024-07-25 10:31:43.323450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.633 [2024-07-25 10:31:43.323535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.633 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7fb568000b90 from 10:31:43.323685 through 10:31:43.339398 ...]
00:24:53.634 [2024-07-25 10:31:43.339660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.634 [2024-07-25 10:31:43.339711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.634 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7fb560000b90 from 10:31:43.339891 through 10:31:43.341242 ...]
00:24:53.918 [2024-07-25 10:31:43.341423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.918 [2024-07-25 10:31:43.341486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.918 qpair failed and we were unable to recover it.
00:24:53.918 [2024-07-25 10:31:43.341629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.918 [2024-07-25 10:31:43.341684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.918 qpair failed and we were unable to recover it. 00:24:53.918 [2024-07-25 10:31:43.341826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.918 [2024-07-25 10:31:43.341882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.918 qpair failed and we were unable to recover it. 00:24:53.918 [2024-07-25 10:31:43.342042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.918 [2024-07-25 10:31:43.342106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.918 qpair failed and we were unable to recover it. 00:24:53.918 [2024-07-25 10:31:43.342279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.918 [2024-07-25 10:31:43.342305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.918 qpair failed and we were unable to recover it. 00:24:53.918 [2024-07-25 10:31:43.342513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.918 [2024-07-25 10:31:43.342560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.918 qpair failed and we were unable to recover it. 00:24:53.918 [2024-07-25 10:31:43.342729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.918 [2024-07-25 10:31:43.342783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.918 qpair failed and we were unable to recover it. 00:24:53.918 [2024-07-25 10:31:43.343039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.918 [2024-07-25 10:31:43.343096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.918 qpair failed and we were unable to recover it. 00:24:53.918 [2024-07-25 10:31:43.343266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.918 [2024-07-25 10:31:43.343316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.918 qpair failed and we were unable to recover it. 00:24:53.918 [2024-07-25 10:31:43.343492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.343549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.343723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.343779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 
00:24:53.919 [2024-07-25 10:31:43.343932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.343983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.344121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.344166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.344352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.344406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.344513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.344541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.344776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.344824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.344965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.345013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.345122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.345150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.345281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.345351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.345531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.345588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.345781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.345831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 
00:24:53.919 [2024-07-25 10:31:43.345988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.346049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.346209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.346273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.346422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.346465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.346642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.346698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.346833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.346904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.347083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.347144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.347288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.347341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.347504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.347567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.347780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.347831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.347988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.348051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 
00:24:53.919 [2024-07-25 10:31:43.348156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.348182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.348335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.348385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.348533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.348561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.348720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.348783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.348989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.349038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.349140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.349166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.349338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.349432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.349732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.349788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.350001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.350030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.350167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.350218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 
00:24:53.919 [2024-07-25 10:31:43.350409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.350463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.350655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.350717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.350861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.350919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.351114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.351163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.351299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.351349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.919 [2024-07-25 10:31:43.351499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.919 [2024-07-25 10:31:43.351545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.919 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.351702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.351730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.351882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.351947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.352098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.352141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.352300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.352350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 
00:24:53.920 [2024-07-25 10:31:43.352453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.352490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.352662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.352717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.352823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.352850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.352984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.353037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.353226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.353282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.353439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.353467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.353659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.353713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.353843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.353902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.354053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.354107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.354271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.354298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 
00:24:53.920 [2024-07-25 10:31:43.354446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.354501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.354675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.354730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.354890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.354916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.355063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.355113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.355282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.355345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.355521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.355573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.355751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.355804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.355956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.356005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.356179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.356234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.356400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.356461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 
00:24:53.920 [2024-07-25 10:31:43.356636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.356690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.356879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.356928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.357116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.357170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.357305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.357356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.357541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.357599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.357759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.357810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.357967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.358015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.358183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.358239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.358420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.358474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.358655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.358707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 
00:24:53.920 [2024-07-25 10:31:43.358906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.358957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.359065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.359092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.920 qpair failed and we were unable to recover it. 00:24:53.920 [2024-07-25 10:31:43.359195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.920 [2024-07-25 10:31:43.359221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.359377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.359405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.359507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.359534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.359683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.359709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.359878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.359942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.360123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.360176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.360339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.360398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.360570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.360629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 
00:24:53.921 [2024-07-25 10:31:43.360801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.360863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.361022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.361050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.361158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.361187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.361355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.361411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.361515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.361548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.361712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.361767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.361981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.362032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.362200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.362255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.362431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.362498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.362676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.362722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 
00:24:53.921 [2024-07-25 10:31:43.362870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.362920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.363051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.363105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.363322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.363378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.363497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.363524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.363683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.363735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.363911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.363965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.364105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.364156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.364324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.364380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.364548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.364576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.364684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.364711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 
00:24:53.921 [2024-07-25 10:31:43.364905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.364959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.365124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.365179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.365387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.365438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.365627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.365682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.365837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.365901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.366105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.366153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.366315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.366378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.366541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.366568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.366718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.366800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 00:24:53.921 [2024-07-25 10:31:43.366954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.921 [2024-07-25 10:31:43.366998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.921 qpair failed and we were unable to recover it. 
00:24:53.921 [2024-07-25 10:31:43.367114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.367140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 00:24:53.922 [2024-07-25 10:31:43.367248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.367274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 00:24:53.922 [2024-07-25 10:31:43.367448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.367510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 00:24:53.922 [2024-07-25 10:31:43.367640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.367692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 00:24:53.922 [2024-07-25 10:31:43.367855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.367914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 00:24:53.922 [2024-07-25 10:31:43.368121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.368173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 00:24:53.922 [2024-07-25 10:31:43.368348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.368400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 00:24:53.922 [2024-07-25 10:31:43.368505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.368532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 00:24:53.922 [2024-07-25 10:31:43.368691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.368741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 00:24:53.922 [2024-07-25 10:31:43.368900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.922 [2024-07-25 10:31:43.368967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.922 qpair failed and we were unable to recover it. 
00:24:53.922 [2024-07-25 10:31:43.369081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.922 [2024-07-25 10:31:43.369109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.922 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats continuously from 10:31:43.369 through 10:31:43.411: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 at posix.c:1023:posix_sock_create, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports the sock connection error for one of three handles (tqpair=0x1bed120, tqpair=0x7fb558000b90, tqpair=0x7fb560000b90), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:53.928 [2024-07-25 10:31:43.411420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.411478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.411655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.411712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.411847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.411895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.412025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.412074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.412197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.412256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.412401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.412467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.412669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.412719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.412868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.412933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.413084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.413147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.413345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.413399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 
00:24:53.928 [2024-07-25 10:31:43.413509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.413537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.413695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.413755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.413891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.413939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.414072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.414129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.414255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.414304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.414447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.414510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.414690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.414744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.414879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.414921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.415025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.415051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.415208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.415235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 
00:24:53.928 [2024-07-25 10:31:43.415337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.415363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.415507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.415552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.415689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.415741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.415895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.415943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.416162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.416220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.416373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.416424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.416611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.416669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.416835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.416891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.416998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.417024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.417187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.417246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 
00:24:53.928 [2024-07-25 10:31:43.417389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.417438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.417554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.417583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.417789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.928 [2024-07-25 10:31:43.417844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.928 qpair failed and we were unable to recover it. 00:24:53.928 [2024-07-25 10:31:43.417967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.418016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.418192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.418247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.418420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.418477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.418603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.418630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.418758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.418806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.418962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.419020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.419170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.419233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 
00:24:53.929 [2024-07-25 10:31:43.419359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.419407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.419539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.419595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.419751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.419802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.419975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.420033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.420193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.420220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.420350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.420397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.420539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.420588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.420747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.420808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.420962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.421029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.421185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.421250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 
00:24:53.929 [2024-07-25 10:31:43.421412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.421471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.421648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.421705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.421860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.421887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.422037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.422100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.422275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.422330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.422491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.422518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.422688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.422743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.422846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.422872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.422971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.422997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.423145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.423193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 
00:24:53.929 [2024-07-25 10:31:43.423323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.423371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.423523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.423550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.423677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.423727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.423894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.423946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.424118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.424172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.424275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.424301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.424405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.424432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.424610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.424667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.424827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.424880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.929 [2024-07-25 10:31:43.425019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.425068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 
00:24:53.929 [2024-07-25 10:31:43.425235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.929 [2024-07-25 10:31:43.425294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.929 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.425464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.425533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.425703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.425751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.425881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.425929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.426081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.426108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.426274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.426330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.426500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.426534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.426711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.426770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.426903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.426953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.427110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.427169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 
00:24:53.930 [2024-07-25 10:31:43.427337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.427392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.427519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.427574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.427724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.427790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.427947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.427975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.428094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.428121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.428256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.428301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.428407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.428434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.428595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.428622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.428734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.428761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.428926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.428981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 
00:24:53.930 [2024-07-25 10:31:43.429138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.429193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.429341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.429408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.429546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.429598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.429748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.429804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.430003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.430055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.430158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.430183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.430338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.430400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.430512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.430539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.430698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.430758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.430913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.430972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 
00:24:53.930 [2024-07-25 10:31:43.431176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.431230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.431375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.431401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.431565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.431618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.431747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.431800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.432030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.432079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.432214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.432268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.432460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.432523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.930 [2024-07-25 10:31:43.432713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.930 [2024-07-25 10:31:43.432767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.930 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.432925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.432972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.433124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.433151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 
00:24:53.931 [2024-07-25 10:31:43.433309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.433370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.433525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.433553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.433698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.433746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.433898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.433961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.434174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.434227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.434336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.434364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.434564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.434598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.434749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.434775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.434922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.434977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.435099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.435155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 
00:24:53.931 [2024-07-25 10:31:43.435357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.435410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.435535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.435593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.435761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.435816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.435953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.435999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.436192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.436242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.436385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.436434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.436616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.436665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.436883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.436909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.437040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.437087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.437250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.437307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 
00:24:53.931 [2024-07-25 10:31:43.437502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.437555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.437763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.437813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.438013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.438062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.438217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.438245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.438357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.438385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.438563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.438616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.438767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.438793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.438951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.439004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.439161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.439221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.439362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.439413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 
00:24:53.931 [2024-07-25 10:31:43.439528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.439557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.439737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.439795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.439955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.440014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.440137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.440189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.440364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.440422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.931 [2024-07-25 10:31:43.440563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.931 [2024-07-25 10:31:43.440651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.931 qpair failed and we were unable to recover it. 00:24:53.932 [2024-07-25 10:31:43.440809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.932 [2024-07-25 10:31:43.440872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.932 qpair failed and we were unable to recover it. 00:24:53.932 [2024-07-25 10:31:43.441024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.932 [2024-07-25 10:31:43.441052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.932 qpair failed and we were unable to recover it. 00:24:53.932 [2024-07-25 10:31:43.441204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.932 [2024-07-25 10:31:43.441258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.932 qpair failed and we were unable to recover it. 00:24:53.932 [2024-07-25 10:31:43.441418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.932 [2024-07-25 10:31:43.441474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.932 qpair failed and we were unable to recover it. 
00:24:53.932 [2024-07-25 10:31:43.441630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.441695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.441890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.441941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.442136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.442186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.442346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.442403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.442519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.442548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.442760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.442814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.443013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.443062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.443266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.443312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.443454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.443508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.443695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.443723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.443877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.443929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.444093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.444151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.444316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.444374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.444544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.444597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.444812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.444861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.444985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.445034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.445164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.445219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.445359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.445440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.445657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.445705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.445864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.445922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.446071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.446128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.446339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.446387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.446618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.446675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.446844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.446898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.447052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.447096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.447235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.447321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.447520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.447549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.447767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.447817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.932 [2024-07-25 10:31:43.448024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.932 [2024-07-25 10:31:43.448074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.932 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.448269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.448324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.448546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.448573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.448680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.448707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.448839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.448922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.449029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.449055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.449319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.449368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.449548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.449575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.449780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.449831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.450058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.450108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.450399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.450458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.450674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.450752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.450997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.451025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.451244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.451294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.451453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.451511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.451715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.451763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.451938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.451986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.452144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.452205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.452416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.452465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.452607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.452669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.452877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.452926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.453092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.453146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.453319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.453372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.453592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.453649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.453817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.453874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.454108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.454158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.454335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.454383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.454559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.454607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.454801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.454849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.455008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.455068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.455220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.455247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.455476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.455510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.455661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.455729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.455935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.455988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.456149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.456199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.456343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.456392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.456562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.456592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.456884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.933 [2024-07-25 10:31:43.456934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.933 qpair failed and we were unable to recover it.
00:24:53.933 [2024-07-25 10:31:43.457224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.457273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.457448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.457519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.457728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.457780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.457989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.458038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.458257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.458311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.458444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.458532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.458737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.458763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.458980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.459034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.459243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.459293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.459551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.459577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.459720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.459762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.459944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.460003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.460146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.460228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.460425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.460478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.460695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.460743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.460945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.460995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.461200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.461249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.461388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.461445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.461662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.461712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.461920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.461969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.462102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.462157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.462391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.462446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.462650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.462709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.462884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.462933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.463051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.463081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.463282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.463332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.463476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.463544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.463677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.463724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.464000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.464047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.464250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.464297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.464400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.464428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.464625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.464674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.464909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.464959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.465169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.465218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.465366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.465413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.465608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.465638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.465847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.465893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.466093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.934 [2024-07-25 10:31:43.466141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.934 qpair failed and we were unable to recover it.
00:24:53.934 [2024-07-25 10:31:43.466388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.466436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.466659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.466715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.466922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.466971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.467143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.467200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.467370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.467422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.467580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.467626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.467791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.467846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.468054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.468104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.468300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.468351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.468570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.468622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.468782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.468843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.468954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.468981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.469158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.469212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.469319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.469345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.469518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.469546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.469655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.469683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.469884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.469935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.470124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.470174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.470284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.470312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.470505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.470550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.470714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.470772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.471041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.471089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.471216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.471277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.471508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.471556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.471784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.471832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.472068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.472117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.472285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.472339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.472526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.472553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.472660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.472687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.472975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.473045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.473321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.473384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.473621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.473683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.473994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.474056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.474366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.474426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.474635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.474691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.474874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.474922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.475069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.475123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.475369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.935 [2024-07-25 10:31:43.475434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.935 qpair failed and we were unable to recover it.
00:24:53.935 [2024-07-25 10:31:43.475724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.475775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.475909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.475992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.476197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.476249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.476360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.476387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.476588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.476639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.476854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.476906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.477131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.477180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.477393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.477442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.477585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.477635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.477815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.477869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.478063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.478114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.478321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.478372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.478528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.478605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.478800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.478853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.479061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.479131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.479417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.479476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.479728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.479780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.479953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.480009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.480216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.480263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.480425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.480491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.480699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.480751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.480884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.480940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.481108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.481162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.481263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.481288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.481510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.481557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.481739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.481796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.481909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.481935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.482128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.936 [2024-07-25 10:31:43.482181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.936 qpair failed and we were unable to recover it.
00:24:53.936 [2024-07-25 10:31:43.482378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.482442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 00:24:53.936 [2024-07-25 10:31:43.482721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.482782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 00:24:53.936 [2024-07-25 10:31:43.483018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.483044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 00:24:53.936 [2024-07-25 10:31:43.483280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.483350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 00:24:53.936 [2024-07-25 10:31:43.483521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.483580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 00:24:53.936 [2024-07-25 10:31:43.483865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.483923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 00:24:53.936 [2024-07-25 10:31:43.484200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.484261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 00:24:53.936 [2024-07-25 10:31:43.484494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.484548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 00:24:53.936 [2024-07-25 10:31:43.484748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.484818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 00:24:53.936 [2024-07-25 10:31:43.485099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.936 [2024-07-25 10:31:43.485156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.936 qpair failed and we were unable to recover it. 
00:24:53.937 [2024-07-25 10:31:43.485355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.485417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.485604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.485639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.485807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.485866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.486050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.486100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.486270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.486325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.486434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.486462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.486681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.486732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.486886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.486943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.487166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.487218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.487398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.487453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 
00:24:53.937 [2024-07-25 10:31:43.487630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.487685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.487835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.487916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.488124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.488175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.488339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.488398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.488589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.488618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.488853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.488900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.489098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.489147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.489307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.489357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.489497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.489541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.489752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.489801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 
00:24:53.937 [2024-07-25 10:31:43.489953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.490000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.490213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.490266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.490472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.490530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.490705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.490762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.490895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.490948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.491137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.491190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.491399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.491450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.491667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.491715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.491881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.491939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.492043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.492070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 
00:24:53.937 [2024-07-25 10:31:43.492289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.492343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.492541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.937 [2024-07-25 10:31:43.492568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.937 qpair failed and we were unable to recover it. 00:24:53.937 [2024-07-25 10:31:43.492771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.492820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.492963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.493019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.493203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.493254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.493460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.493516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.493623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.493650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.493826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.493880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.494046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.494073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.494190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.494216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 
00:24:53.938 [2024-07-25 10:31:43.494372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.494431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.494663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.494693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.494864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.494921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.495069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.495139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.495364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.495413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.495525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.495553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.495733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.495793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.495963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.496012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.496165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.496192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.496419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.496472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 
00:24:53.938 [2024-07-25 10:31:43.496731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.496792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.497044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.497106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.497315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.497378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.497634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.497698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.497984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.498042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.498304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.498358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.498519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.498569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.498764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.498817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.498987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.499040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 00:24:53.938 [2024-07-25 10:31:43.499145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.938 [2024-07-25 10:31:43.499171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.938 qpair failed and we were unable to recover it. 
00:24:53.938 [2024-07-25 10:31:43.499326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.499388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.938 [2024-07-25 10:31:43.499524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.499581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.938 [2024-07-25 10:31:43.499741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.499800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.938 [2024-07-25 10:31:43.500005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.500053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.938 [2024-07-25 10:31:43.500224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.500278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.938 [2024-07-25 10:31:43.500402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.500459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.938 [2024-07-25 10:31:43.500621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.500651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.938 [2024-07-25 10:31:43.500817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.500873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.938 [2024-07-25 10:31:43.501066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.501116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.938 [2024-07-25 10:31:43.501386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.938 [2024-07-25 10:31:43.501434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:53.938 qpair failed and we were unable to recover it.
00:24:53.942 [2024-07-25 10:31:43.528742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.528800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.528999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.529060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.529260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.529334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.529660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.529722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.530002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.530061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.530306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.530364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.530685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.530745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.531028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.531085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.531355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.531413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.531711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.531760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 
00:24:53.942 [2024-07-25 10:31:43.532039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.532097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.532288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.532359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.532676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.532736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.533049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.533107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.533322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.533381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.533592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.533640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.533745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.533771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.533925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.533952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.534082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.534136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.534344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.534393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 
00:24:53.942 [2024-07-25 10:31:43.534565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.534618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.534721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.534746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.534897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.534951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.535167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.535220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.535407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.535459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.535628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.535677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.535907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.535961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.536131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.536186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.536383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.536430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.536535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.536562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 
00:24:53.942 [2024-07-25 10:31:43.536732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.536782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.536987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.537036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.537186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.537212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.537426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.537492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.537703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.537729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.942 qpair failed and we were unable to recover it. 00:24:53.942 [2024-07-25 10:31:43.537895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.942 [2024-07-25 10:31:43.537949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.538095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.538153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.538324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.538377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.538612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.538664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.538847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.538901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 
00:24:53.943 [2024-07-25 10:31:43.539111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.539158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.539261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.539286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.539458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.539521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.539704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.539761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.539941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.540007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.540279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.540338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.540590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.540641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.540820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.540873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.541040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.541067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.541170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.541198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 
00:24:53.943 [2024-07-25 10:31:43.541356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.541405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.541564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.541622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.541892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.541956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.542291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.542354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.542558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.542620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.542866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.542925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.543207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.543268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.543465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.543553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.543723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.543789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.544047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.544105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 
00:24:53.943 [2024-07-25 10:31:43.544395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.544453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.544683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.544736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.544934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.544961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.545123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.545173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.545350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.545402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.545607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.545655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.545931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.545980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.546185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.546238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.546412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.546460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.546709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.546760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 
00:24:53.943 [2024-07-25 10:31:43.546916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.546942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.547120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.547173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.547337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.943 [2024-07-25 10:31:43.547391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.943 qpair failed and we were unable to recover it. 00:24:53.943 [2024-07-25 10:31:43.547562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.547617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.547749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.547775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.547918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.547969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.548165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.548217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.548407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.548433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.548633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.548691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.548793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.548820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 
00:24:53.944 [2024-07-25 10:31:43.548992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.549047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.549234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.549285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.549458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.549522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.549728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.549756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.549912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.549939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.550094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.550143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.550321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.550372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.550539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.550567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.550744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.550805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.551035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.551084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 
00:24:53.944 [2024-07-25 10:31:43.551297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.551348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.551532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.551596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.551756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.551818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.552028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.552077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.552310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.552371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.552499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.552554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.552728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.552756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.552939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.552997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.553109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.553135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.553418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.553472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 
00:24:53.944 [2024-07-25 10:31:43.553641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.553669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.553880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.553929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.554113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.554139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.554313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.554369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.554576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.554630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.554877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.554929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.555087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.555114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.555305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.555362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.555544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.555602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.944 [2024-07-25 10:31:43.555778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.555832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 
00:24:53.944 [2024-07-25 10:31:43.555940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.944 [2024-07-25 10:31:43.555967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.944 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.556135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.556192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.556356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.556382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.556542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.556592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.556702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.556732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.556931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.557005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.557263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.557316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.557548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.557604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.557785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.557839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.558032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.558085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 
00:24:53.945 [2024-07-25 10:31:43.558239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.558264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.558439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.558493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.558678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.558731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.558852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.558880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.558993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.559020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.559213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.559267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.559442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.559506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.559686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.559737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.559891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.559941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 00:24:53.945 [2024-07-25 10:31:43.560121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.945 [2024-07-25 10:31:43.560177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.945 qpair failed and we were unable to recover it. 
00:24:53.945 [2024-07-25 10:31:43.560348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.945 [2024-07-25 10:31:43.560405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:53.945 qpair failed and we were unable to recover it.
00:24:53.945-00:24:53.951 [... the same three-line sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats without variation from 10:31:43.560348 through 10:31:43.605037, cycling over tqpair=0x1bed120, 0x7fb558000b90, 0x7fb560000b90, and 0x7fb568000b90, always with addr=10.0.0.2, port=4420 ...]
00:24:53.951 [2024-07-25 10:31:43.605213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.605266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.605417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.605462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.605643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.605704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.605837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.605896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.606017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.606045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.606156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.606184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.606301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.606328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.606436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.606463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.606628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.606655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.606805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.606859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 
00:24:53.951 [2024-07-25 10:31:43.607001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.607059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.607216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.607243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.607403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.607458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.607619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.607682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.607810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.607861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.608036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.608089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.608255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.608311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.608523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.608552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.608706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.608776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.608965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.609021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 
00:24:53.951 [2024-07-25 10:31:43.609166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.609219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.609381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.609441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.609609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.609670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.609825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.609892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.610023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.610071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.610211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.610267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.610426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.610496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.610647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.610694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.610846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.610872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 00:24:53.951 [2024-07-25 10:31:43.611005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.951 [2024-07-25 10:31:43.611053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.951 qpair failed and we were unable to recover it. 
00:24:53.951 [2024-07-25 10:31:43.611205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.611253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.611430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.611486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.611698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.611748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.611914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.611974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.612149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.612206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.612401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.612462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.612688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.612742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.612911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.612962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.613073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.613100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.613266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.613326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 
00:24:53.952 [2024-07-25 10:31:43.613477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.613549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.613688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.613734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.613906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.613961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.614141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.614196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.614348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.614414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.614583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.614640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.614778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.614858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.615010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.615075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.615217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.615267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.615418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.615444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 
00:24:53.952 [2024-07-25 10:31:43.615628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.615684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.615840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.615868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.616000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.616049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.616191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.616261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.616448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.616511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.616668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.616731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.616888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.616945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.617096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.617158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.617282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.617331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.617512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.617559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 
00:24:53.952 [2024-07-25 10:31:43.617708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.617734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.617886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.617940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.618079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.618126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.618262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.618316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.618477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.618543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.618735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.618788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.618961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.619023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.619230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.619281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.619452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.619512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 00:24:53.952 [2024-07-25 10:31:43.619670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.952 [2024-07-25 10:31:43.619731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.952 qpair failed and we were unable to recover it. 
00:24:53.953 [2024-07-25 10:31:43.619883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.619947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.620107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.620164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.620338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.620390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.620572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.620622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.620759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.620802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.620957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.621023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.621189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.621238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.621394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.621422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.621559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.621610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.621766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.621793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 
00:24:53.953 [2024-07-25 10:31:43.621929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.621977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.622155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.622215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.622372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.622400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.622518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.622545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.622681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.622727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.622879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.622948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.623099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.623164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.623328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.623383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.623539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.623567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.623672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.623698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 
00:24:53.953 [2024-07-25 10:31:43.623852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.623917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.624106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.624161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.624267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.624295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.624396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.624423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.624566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.624622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.624762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.624816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.624972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.625032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.625212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.625265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.625420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.625448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.625628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.625680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 
00:24:53.953 [2024-07-25 10:31:43.625842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.625903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.626073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.626126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.626259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.626311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.626431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.626457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.626630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.626689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.626836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.626887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.627042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.627095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.627275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.627330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.627469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.627523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.627658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.627713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 
00:24:53.953 [2024-07-25 10:31:43.627861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.627913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.628082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.953 [2024-07-25 10:31:43.628136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.953 qpair failed and we were unable to recover it. 00:24:53.953 [2024-07-25 10:31:43.628346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.628418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.628630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.628681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.628845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.628903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.629042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.629090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.629253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.629309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.629443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.629522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.629703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.629760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.629934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.629985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 
00:24:53.954 [2024-07-25 10:31:43.630089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.630116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.630251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.630304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.630454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.630525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.630701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.630750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.630947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.630998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.631142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.631202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.631375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.631432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.631571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.631626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.631805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.631855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.632001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.632048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 
00:24:53.954 [2024-07-25 10:31:43.632203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.632261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.632374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.632402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.632525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.632553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.632693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.632771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.632928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.632986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.633135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.633186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.633344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.633406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.633526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.633554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.633716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.633774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.633940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.633997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 
00:24:53.954 [2024-07-25 10:31:43.634178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.634226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.634415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.634468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.634646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.634704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.634832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.634887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.635034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.635081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.635239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.635303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.635468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.635536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.635695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.635721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.635843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.635895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.636028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.636110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 
00:24:53.954 [2024-07-25 10:31:43.636272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.636330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.636502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.636558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.636723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.636784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.636959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.637008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.637167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.637223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.637388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.954 [2024-07-25 10:31:43.637433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.954 qpair failed and we were unable to recover it. 00:24:53.954 [2024-07-25 10:31:43.637578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.637629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.637819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.637873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.638090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.638117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.638309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.638359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 
00:24:53.955 [2024-07-25 10:31:43.638518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.638546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.638743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.638803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.638970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.639027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.639221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.639272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.639421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.639467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.639647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.639704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.639868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.639927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.640071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.640119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.640292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.640346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.640490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.640545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 
00:24:53.955 [2024-07-25 10:31:43.640656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.640683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.640854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.640904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.641015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.641042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.641258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.641306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.641509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.641552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.641749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.641802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.641935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.641983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.642145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.642174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.642366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.642420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.642608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.642694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 
00:24:53.955 [2024-07-25 10:31:43.642991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.643044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.643172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.643225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.643363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.643416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.643621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.643671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.643835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.643889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.644023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.644067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.644245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.644292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.644454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.644518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.644687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.644738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.644878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.644943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 
00:24:53.955 [2024-07-25 10:31:43.645122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.645175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.645356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.645404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.645535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.645591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.645755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.645813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.645955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.955 [2024-07-25 10:31:43.646009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.955 qpair failed and we were unable to recover it. 00:24:53.955 [2024-07-25 10:31:43.646179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.646233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.646343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.646369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.646538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.646597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.646760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.646819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.646923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.646949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 
00:24:53.956 [2024-07-25 10:31:43.647113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.647167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.647327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.647385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.647495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.647522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.647717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.647769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.647957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.647985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.648092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.648120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.648299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.648352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.648527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.648556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.648703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.648758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.648922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.648977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 
00:24:53.956 [2024-07-25 10:31:43.649168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.649219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.649372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.649397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.649583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.649632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.649801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.649854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.650015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.650068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.650277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.650325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.650531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.650578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.650739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.650768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.650967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.651016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.651185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.651241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 
00:24:53.956 [2024-07-25 10:31:43.651399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.651456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.651621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.651651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.651759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.651787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.651968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.652019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.652145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.652198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.652384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.652434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.652628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.652690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.652865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.652919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.653115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.653168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.653363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.653414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 
00:24:53.956 [2024-07-25 10:31:43.653550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.653599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.653761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.653814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.653980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.654031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.654190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.654216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.654391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.654440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.654612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.654669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.654871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.654919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.654956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb190 (9): Bad file descriptor 00:24:53.956 [2024-07-25 10:31:43.655135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.655194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 00:24:53.956 [2024-07-25 10:31:43.655400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.956 [2024-07-25 10:31:43.655454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.956 qpair failed and we were unable to recover it. 
00:24:53.956 [2024-07-25 10:31:43.655669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.655720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.655951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.655989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.656193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.656248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.656403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.656431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.656566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.656617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.656779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.656833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.656971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.657019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.657213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.657264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.657494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.657538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.657713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.657762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 
00:24:53.957 [2024-07-25 10:31:43.657957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.658008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.658223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.658277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.658437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.658501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.658653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.658702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.658843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.658886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.659043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.659103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.659281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.659333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.659493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.659524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.659674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.659735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.659865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.659916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 
00:24:53.957 [2024-07-25 10:31:43.660066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.660119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.660292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.660340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.660524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.660567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.660723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.660771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.660960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.661010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.661167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.661195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.661378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.661425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.661561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.661612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.661792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.661847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.661955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.661983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 
00:24:53.957 [2024-07-25 10:31:43.662159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.662209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.662380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.662430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.662565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.662629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.662755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.662789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.662902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.662929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.663110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.663155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.663323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.663374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.663539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.663567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.663743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.663794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.663942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.663992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 
00:24:53.957 [2024-07-25 10:31:43.664191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.664245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.664400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.664459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.664657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.664684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.664843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.957 [2024-07-25 10:31:43.664902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.957 qpair failed and we were unable to recover it. 00:24:53.957 [2024-07-25 10:31:43.665010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.665037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.665156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.665183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.665351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.665405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.665550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.665609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.665765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.665820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.665978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.666030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 
00:24:53.958 [2024-07-25 10:31:43.666190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.666245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.666380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.666440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.666638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.666691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.666845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.666904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.667062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.667117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.667279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.667335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.667470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.667532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.667714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.667771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.667937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.667998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.668158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.668187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 
00:24:53.958 [2024-07-25 10:31:43.668294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.668325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.668490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.668517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.668664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.668708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.668836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.668887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.669035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.669081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.669213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.669290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.669458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.669516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.669635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.669664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.669820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.669873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.670027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.670053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 
00:24:53.958 [2024-07-25 10:31:43.670207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.670251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.670417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.670469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.670631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.670657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.670841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.670895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.671074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.671127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.671234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.671260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.671414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.671441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.671648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.671725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.671922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.671950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 00:24:53.958 [2024-07-25 10:31:43.672109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.958 [2024-07-25 10:31:43.672169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:53.958 qpair failed and we were unable to recover it. 
00:24:53.958 [2024-07-25 10:31:43.672356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.958 [2024-07-25 10:31:43.672416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.958 qpair failed and we were unable to recover it.
00:24:53.958 [2024-07-25 10:31:43.672641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.958 [2024-07-25 10:31:43.672684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.958 qpair failed and we were unable to recover it.
00:24:53.958 [2024-07-25 10:31:43.672898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.958 [2024-07-25 10:31:43.672957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:53.958 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.673166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.673226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.673413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.673478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.673700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.673762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.673921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.673977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.674183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.674226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.674404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.674457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.674641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.674702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.674859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.674886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.675092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.675142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.675304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.675363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.675468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.675509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.675659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.675714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.675889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.675937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.676107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.676136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.676346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.676414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.676665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.676695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.676828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.676877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.677033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.677095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.677224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.677279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.677431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.677506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.677717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.677769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.677901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.677970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.678143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.678201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.678371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.678424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.678572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.678654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.678798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.238 [2024-07-25 10:31:43.678848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.238 qpair failed and we were unable to recover it.
00:24:54.238 [2024-07-25 10:31:43.679001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.679027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.679166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.679248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.679399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.679459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.679608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.679664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.679844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.679896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.680072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.680127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.680278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.680341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.680447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.680472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.680685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.680732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.680953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.681003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.681171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.681226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.681370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.681426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.681567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.681614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.681819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.681873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.682023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.682082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.682283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.682336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.682552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.682621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.682859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.682911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.683076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.683137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.683271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.683322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.683478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.683550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.683706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.683770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.683878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.683905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.684042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.684095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.684227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.684308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.684431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.684488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.684707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.684757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.684893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.684946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.685098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.685157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.685265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.685293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.685503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.685548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.685746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.685793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.685954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.685981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.686086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.686116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.686253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.686306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.686465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.686529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.686734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.686785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.239 [2024-07-25 10:31:43.686940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.239 [2024-07-25 10:31:43.687002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.239 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.687171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.687228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.687357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.687419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.687582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.687638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.687773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.687854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.688031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.688084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.688241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.688304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.688454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.688510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.688666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.688716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.688913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.688995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.689143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.689197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.689353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.689381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.689592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.689642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.689774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.689827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.689965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.690014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.690167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.690194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.690389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.690440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.690606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.690663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.690773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.690800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.691007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.691059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.691216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.691244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.691390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.691462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.691666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.691695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.691905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.691954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.692111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.692170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.692427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.692478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.692592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.692619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.692724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.692750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.692933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.692959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.693103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.693156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.693350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.693416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.693633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.693693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.693877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.693925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.694265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.694323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.694505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.694556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.694759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.694830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.695100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.695176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.240 qpair failed and we were unable to recover it.
00:24:54.240 [2024-07-25 10:31:43.695365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.240 [2024-07-25 10:31:43.695434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.695600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.695669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.695857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.695924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.696200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.696259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.696442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.696525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.696715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.696786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.697007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.697068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.697342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.697401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.697598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.697643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.697877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.697936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.698235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.698293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.698555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.698608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.698810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.698863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.699012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.699077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.699207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.699261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.699440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.699494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.699640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.699708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.699871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.699925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.700084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.700145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.700335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.700387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.700543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.700570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.700683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.700709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.700902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.700954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.701102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.701168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.701331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.701397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.701515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.701543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.701652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.701680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.701830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.701880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.702055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.702104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.702294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.702349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.702558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.702585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.702735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.702777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.702909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.702967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.703122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.703181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.703336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.703385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.703495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.703522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.703803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.241 [2024-07-25 10:31:43.703849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.241 qpair failed and we were unable to recover it.
00:24:54.241 [2024-07-25 10:31:43.704033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.704083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.704227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.704276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.704527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.704554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.704678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.704732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.704882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.704933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.705173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.705223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.705433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.705496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.705625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.705673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.705874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.705924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.706029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.706055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.706292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.706357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.706553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.706626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.706786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.706853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.707126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.707184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.707455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.707517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.707675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.707724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.707824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.707851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.707948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.707974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.708082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.708109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.708248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.708274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.708414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.708442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.708588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.708642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.708914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.708999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.709245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.709294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.709423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.709472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.709692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.709742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.709845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.709871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.710002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.710059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.710191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.710240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.710445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.242 [2024-07-25 10:31:43.710502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.242 qpair failed and we were unable to recover it.
00:24:54.242 [2024-07-25 10:31:43.710651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.242 [2024-07-25 10:31:43.710678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.242 qpair failed and we were unable to recover it. 00:24:54.242 [2024-07-25 10:31:43.710884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.242 [2024-07-25 10:31:43.710932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.242 qpair failed and we were unable to recover it. 00:24:54.242 [2024-07-25 10:31:43.711072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.711127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.711323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.711351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.711541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.711592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.711698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.711725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.711830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.711858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.712095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.712160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.712354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.712414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.712621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.712671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 
00:24:54.243 [2024-07-25 10:31:43.712783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.712811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.712927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.712954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.713089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.713139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.713293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.713351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.713501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.713553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.713750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.713776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.713994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.714048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.714269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.714295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.714442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.714487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.714597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.714624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 
00:24:54.243 [2024-07-25 10:31:43.714842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.714890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.715026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.715074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.715231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.715291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.715516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.715543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.715652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.715678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.715868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.715915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.716104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.716171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.716528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.716555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.716806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.716867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.717166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.717225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 
00:24:54.243 [2024-07-25 10:31:43.717533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.717560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.717878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.717935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.718139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.718208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.718396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.718440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.718609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.718672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.718968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.718994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.243 [2024-07-25 10:31:43.719254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.243 [2024-07-25 10:31:43.719330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.243 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.719515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.719593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.719781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.719842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.720209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.720266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 
00:24:54.244 [2024-07-25 10:31:43.720449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.720476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.720712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.720770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.720945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.720999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.721251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.721309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.721494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.721556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.721717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.721766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.721985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.722056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.722242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.722313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.722508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.722579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.722765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.722809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 
00:24:54.244 [2024-07-25 10:31:43.723067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.723124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.723412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.723470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.723795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.723853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.724176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.724247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.724436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.724520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.724801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.724859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.725117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.725175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.725370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.725442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.725663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.725747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.725857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.725884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 
00:24:54.244 [2024-07-25 10:31:43.726091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.726139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.726306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.726360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.726510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.726555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.726677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.726704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.726882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.726944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.727118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.727181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.727428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.727495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.727678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.727743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.728021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.728081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.728333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.728401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 
00:24:54.244 [2024-07-25 10:31:43.728684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.728748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.729017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.729090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.729365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.729424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.244 [2024-07-25 10:31:43.729614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.244 [2024-07-25 10:31:43.729667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.244 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.729809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.729863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.730079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.730133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.730289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.730317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.730538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.730572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.730719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.730764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.730943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.731020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 
00:24:54.245 [2024-07-25 10:31:43.731189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.731245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.731435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.731515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.731803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.731862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.732146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.732203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.732367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.732436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.732749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.732808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.733001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.733071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.733249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.733310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.733499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.733550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.733808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.733867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 
00:24:54.245 [2024-07-25 10:31:43.734056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.734125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.734325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.734390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.734658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.734717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.734874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.734926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.735236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.735305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.735589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.735648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.735931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.735991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.736237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.736287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.736449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.736477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.736654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.736713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 
00:24:54.245 [2024-07-25 10:31:43.736892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.736941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.737168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.737222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.737429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.737488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.737615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.737644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.737847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.737898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.738097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.738125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.738279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.738306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.738542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.738569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.738815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.738866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 00:24:54.245 [2024-07-25 10:31:43.739115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.739165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.245 qpair failed and we were unable to recover it. 
00:24:54.245 [2024-07-25 10:31:43.739361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.245 [2024-07-25 10:31:43.739415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.739608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.739660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.739889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.739958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.740179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.740242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.740531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.740558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.740725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.740769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.741052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.741113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.741394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.741466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.741785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.741811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.742090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.742116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 
00:24:54.246 [2024-07-25 10:31:43.742332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.742400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.742958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.743019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.743211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.743279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.743570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.743629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.743813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.743880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.744166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.744225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.744507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.744570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.744843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.744905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.745090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.745157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.745347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.745418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 
00:24:54.246 [2024-07-25 10:31:43.745743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.745803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.745981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.746033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.746225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.746291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.746474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.746530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.746793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.746851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.747031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.747091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.747413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.747471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.747774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.747836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.747998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.748051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.748241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.748304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 
00:24:54.246 [2024-07-25 10:31:43.748511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.748537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.748792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.748852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.749181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.749249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.749433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.749535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.749690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.749716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.749950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.750016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.750212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.750274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.246 [2024-07-25 10:31:43.750466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.246 [2024-07-25 10:31:43.750553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.246 qpair failed and we were unable to recover it. 00:24:54.247 [2024-07-25 10:31:43.750746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.247 [2024-07-25 10:31:43.750817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.247 qpair failed and we were unable to recover it. 00:24:54.247 [2024-07-25 10:31:43.751143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.247 [2024-07-25 10:31:43.751204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.247 qpair failed and we were unable to recover it. 
00:24:54.247 [2024-07-25 10:31:43.751524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.247 [2024-07-25 10:31:43.751584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.247 qpair failed and we were unable to recover it.
00:24:54.253 [the three-message sequence above repeats for every subsequent reconnect attempt from 10:31:43.751863 through 10:31:43.808469: connect() fails with errno = 111 on each attempt against addr=10.0.0.2, port=4420, and the affected tqpair handles cycle through 0x7fb568000b90, 0x1bed120, 0x7fb560000b90, and 0x7fb558000b90]
00:24:54.253 [2024-07-25 10:31:43.808613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.808663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.808816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.808867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.809010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.809055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.809242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.809294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.809397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.809423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.809560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.809612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.809818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.809872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.810042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.810086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.810280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.810306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.810474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.810545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 
00:24:54.253 [2024-07-25 10:31:43.810649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.810675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.810839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.810888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.811124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.811178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.811374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.811422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.811534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.811562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.811739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.811790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.811918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.811969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.812157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.812208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.812393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.812448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.812675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.812731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 
00:24:54.253 [2024-07-25 10:31:43.812890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.812917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.813109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.813136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.813306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.813362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.813555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.813605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.813817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.813872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.814036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.814080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.814323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.814352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.814523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.814577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.814715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.814767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.814879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.814906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 
00:24:54.253 [2024-07-25 10:31:43.815077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.815132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.815306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.815358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.815550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.815577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.815723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.815775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.815941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.816000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.816171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.253 [2024-07-25 10:31:43.816225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.253 qpair failed and we were unable to recover it. 00:24:54.253 [2024-07-25 10:31:43.816380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.816407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.816618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.816671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.816827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.816877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.817058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.817111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 
00:24:54.254 [2024-07-25 10:31:43.817250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.817302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.817534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.817586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.817819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.817868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.818023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.818074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.818180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.818207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.818404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.818463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.818659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.818718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.818825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.818851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.818981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.819035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.819157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.819212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 
00:24:54.254 [2024-07-25 10:31:43.819362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.819414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.819614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.819713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.820058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.820159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.820446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.820557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.820831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.820869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.821238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.821321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.821674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.821763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.822080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.822143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.822370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.822396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.822684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.822750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 
00:24:54.254 [2024-07-25 10:31:43.822961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.823021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.823229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.823314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.823649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.823721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.823921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.823979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.824182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.824263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.824468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.824537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.824721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.824775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.825028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.825110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.825398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.825468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.825737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.825790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 
00:24:54.254 [2024-07-25 10:31:43.825949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.825976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.826144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.826198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.826414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.826463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.254 qpair failed and we were unable to recover it. 00:24:54.254 [2024-07-25 10:31:43.826648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.254 [2024-07-25 10:31:43.826703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.826849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.826902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.827072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.827121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.827311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.827375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.827560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.827588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.827721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.827780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.828007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.828060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 
00:24:54.255 [2024-07-25 10:31:43.828222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.828276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.828843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.828873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.829059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.829113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.829272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.829327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.829498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.829547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.829708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.829756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.830007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.830061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.830256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.830311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.830488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.830515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.830680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.830707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 
00:24:54.255 [2024-07-25 10:31:43.830896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.830949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.831167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.831228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.831511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.831545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.831765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.831820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.832005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.832054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.832234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.832286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.832461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.832531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.832755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.832812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.832920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.832947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.833126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.833180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 
00:24:54.255 [2024-07-25 10:31:43.833356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.833443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.833645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.833677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.833825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.833876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.833989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.834017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.834215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.834270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.834459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.834495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.834673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.834701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.834806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.834832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.835024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.835079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 00:24:54.255 [2024-07-25 10:31:43.835271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.255 [2024-07-25 10:31:43.835327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.255 qpair failed and we were unable to recover it. 
00:24:54.255 [2024-07-25 10:31:43.835474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.835541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.835738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.835782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.836001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.836054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.836216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.836265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.836368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.836394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.836562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.836615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.836781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.836832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.836976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.837030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.837243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.837295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.837538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.837583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 
00:24:54.256 [2024-07-25 10:31:43.837740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.837766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.837930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.837956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.838169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.838220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.838352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.838413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.838587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.838634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.838800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.838825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.839023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.839075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.839223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.839279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.839446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.839475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 00:24:54.256 [2024-07-25 10:31:43.839732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.256 [2024-07-25 10:31:43.839783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.256 qpair failed and we were unable to recover it. 
00:24:54.256 [2024-07-25 10:31:43.839990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.256 [2024-07-25 10:31:43.840041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.256 qpair failed and we were unable to recover it.
00:24:54.256 [2024-07-25 10:31:43.840290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.256 [2024-07-25 10:31:43.840388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.256 qpair failed and we were unable to recover it.
00:24:54.256 [2024-07-25 10:31:43.840932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.256 [2024-07-25 10:31:43.840960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.256 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats from 10:31:43.841125 through 10:31:43.848771 on tqpair=0x7fb560000b90 and 0x7fb558000b90, all with addr=10.0.0.2, port=4420 ...]
00:24:54.257 [2024-07-25 10:31:43.848988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.257 [2024-07-25 10:31:43.849044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.257 qpair failed and we were unable to recover it.
[... the same failure sequence repeats from 10:31:43.849322 through 10:31:43.885596 on tqpair=0x7fb568000b90, 0x7fb560000b90, and 0x7fb558000b90, all with addr=10.0.0.2, port=4420 ...]
00:24:54.262 [2024-07-25 10:31:43.885797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.262 [2024-07-25 10:31:43.885848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.262 qpair failed and we were unable to recover it.
00:24:54.262 [2024-07-25 10:31:43.886024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.262 [2024-07-25 10:31:43.886071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.262 qpair failed and we were unable to recover it. 00:24:54.262 [2024-07-25 10:31:43.886236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.262 [2024-07-25 10:31:43.886264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.262 qpair failed and we were unable to recover it. 00:24:54.262 [2024-07-25 10:31:43.886430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.262 [2024-07-25 10:31:43.886491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.262 qpair failed and we were unable to recover it. 00:24:54.262 [2024-07-25 10:31:43.886703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.262 [2024-07-25 10:31:43.886751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.262 qpair failed and we were unable to recover it. 00:24:54.262 [2024-07-25 10:31:43.886939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.262 [2024-07-25 10:31:43.886984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.262 qpair failed and we were unable to recover it. 00:24:54.262 [2024-07-25 10:31:43.887116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.262 [2024-07-25 10:31:43.887161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.262 qpair failed and we were unable to recover it. 00:24:54.262 [2024-07-25 10:31:43.887317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.262 [2024-07-25 10:31:43.887372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.262 qpair failed and we were unable to recover it. 00:24:54.262 [2024-07-25 10:31:43.887544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.262 [2024-07-25 10:31:43.887572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.262 qpair failed and we were unable to recover it. 00:24:54.262 [2024-07-25 10:31:43.887791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.887840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.888019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.888070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 
00:24:54.263 [2024-07-25 10:31:43.888243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.888297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.888512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.888563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.888689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.888736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.888868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.888948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.889119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.889147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.889393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.889446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.889619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.889677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.889778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.889804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.889991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.890041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.890191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.890245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 
00:24:54.263 [2024-07-25 10:31:43.890447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.890503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.890678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.890731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.890855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.890899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.891008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.891037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.891204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.891232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.891433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.891489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.891665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.891716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.891860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.891908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.892119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.892169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.892414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.892464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 
00:24:54.263 [2024-07-25 10:31:43.892620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.892676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.892845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.892901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.893112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.893161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.893387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.893436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.893579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.893632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.893852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.893903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.894068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.894119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.894324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.894375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.894557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.894610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.894719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.894745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 
00:24:54.263 [2024-07-25 10:31:43.894864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.894908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.895044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.895094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.895329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.895379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.895549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.895602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.895784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.263 [2024-07-25 10:31:43.895835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.263 qpair failed and we were unable to recover it. 00:24:54.263 [2024-07-25 10:31:43.896049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.896099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.896247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.896299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.896506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.896555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.896722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.896779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.896944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.896971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 
00:24:54.264 [2024-07-25 10:31:43.897175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.897225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.897330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.897357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.897488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.897535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.897732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.897784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.898007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.898055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.898207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.898261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.898370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.898398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.898576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.898629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.898811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.898860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.899049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.899103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 
00:24:54.264 [2024-07-25 10:31:43.899205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.899231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.899330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.899356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.899538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.899565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.899758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.899785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.899942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.899989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.900170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.900224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.900419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.900469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.900623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.900678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.900861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.900906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.901055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.901111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 
00:24:54.264 [2024-07-25 10:31:43.901253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.901306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.901411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.901442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.901568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.901597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.901764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.901814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.901991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.902040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.902187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.902242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.902436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.902494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.902652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.264 [2024-07-25 10:31:43.902706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.264 qpair failed and we were unable to recover it. 00:24:54.264 [2024-07-25 10:31:43.902871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.902921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.903079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.903134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 
00:24:54.265 [2024-07-25 10:31:43.903304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.903351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.903555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.903603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.903818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.903875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.904043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.904091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.904302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.904356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.904536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.904585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.904797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.904848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.904994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.905045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.905171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.905226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.905392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.905442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 
00:24:54.265 [2024-07-25 10:31:43.905667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.905722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.905986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.906036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.906188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.906241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.906412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.906440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.906595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.906648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.906855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.906881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.907136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.907184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.907389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.907439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.907595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.907648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.907783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.907839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 
00:24:54.265 [2024-07-25 10:31:43.907943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.907969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.908135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.908163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.908357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.908407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.908626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.908676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.908847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.908901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.909067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.909113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.909279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.909333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.909455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.909523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.909721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.909770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.909926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.909977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 
00:24:54.265 [2024-07-25 10:31:43.910154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.910209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.910414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.910473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.910691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.910742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.910915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.910942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.911046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.911072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.265 [2024-07-25 10:31:43.911243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.265 [2024-07-25 10:31:43.911290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.265 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.911543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.911591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.911742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.911794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.912025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.912076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.912210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.912263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 
00:24:54.266 [2024-07-25 10:31:43.912439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.912501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.912610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.912637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.912851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.912900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.913073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.913124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.913233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.913260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.913459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.913518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.913717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.913764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.913976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.914031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.914179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.914231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 00:24:54.266 [2024-07-25 10:31:43.914379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.266 [2024-07-25 10:31:43.914435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.266 qpair failed and we were unable to recover it. 
00:24:54.266 [2024-07-25 10:31:43.914611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.266 [2024-07-25 10:31:43.914659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.266 qpair failed and we were unable to recover it.
00:24:54.266 [... same three messages repeated for tqpair=0x7fb558000b90 through 10:31:43.924585 ...]
00:24:54.267 [2024-07-25 10:31:43.924864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.267 [2024-07-25 10:31:43.924941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.267 qpair failed and we were unable to recover it.
00:24:54.267 [... same three messages repeated for tqpair=0x7fb568000b90 through 10:31:43.936773 ...]
00:24:54.269 [2024-07-25 10:31:43.937061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.269 [2024-07-25 10:31:43.937120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.269 qpair failed and we were unable to recover it.
00:24:54.269 [... same three messages repeated for tqpair=0x7fb560000b90 through 10:31:43.939749 ...]
00:24:54.269 [2024-07-25 10:31:43.940008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.269 [2024-07-25 10:31:43.940075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.269 qpair failed and we were unable to recover it.
00:24:54.269 [... same three messages repeated for tqpair=0x7fb568000b90 through 10:31:43.941430 ...]
00:24:54.269 [2024-07-25 10:31:43.941769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.269 [2024-07-25 10:31:43.941819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.269 qpair failed and we were unable to recover it.
00:24:54.269 [... same three messages repeated for tqpair=0x7fb558000b90 through 10:31:43.945256 ...]
00:24:54.269 [2024-07-25 10:31:43.945504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.269 [2024-07-25 10:31:43.945569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.269 qpair failed and we were unable to recover it.
00:24:54.269 [... same three messages repeated for tqpair=0x7fb568000b90 through 10:31:43.948591 ...]
00:24:54.270 [2024-07-25 10:31:43.948803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.270 [2024-07-25 10:31:43.948859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.270 qpair failed and we were unable to recover it.
00:24:54.270 [... same three messages repeated for tqpair=0x7fb560000b90 through 10:31:43.951423 ...]
00:24:54.270 [2024-07-25 10:31:43.951634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.270 [2024-07-25 10:31:43.951699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.270 qpair failed and we were unable to recover it.
00:24:54.270 [... same three messages repeated for tqpair=0x7fb568000b90 through 10:31:43.959908 ...]
00:24:54.271 [2024-07-25 10:31:43.960092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.271 [2024-07-25 10:31:43.960161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.271 qpair failed and we were unable to recover it.
00:24:54.271 [... same three messages repeated for tqpair=0x7fb558000b90 through 10:31:43.966634 ...]
00:24:54.272 [2024-07-25 10:31:43.966876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.272 [2024-07-25 10:31:43.966943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.272 qpair failed and we were unable to recover it.
00:24:54.272 [... same three messages repeated for tqpair=0x7fb568000b90 through 10:31:43.968690 ...]
00:24:54.272 [2024-07-25 10:31:43.968856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.272 [2024-07-25 10:31:43.968887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.272 qpair failed and we were unable to recover it.
00:24:54.272 [... same three messages repeated for tqpair=0x7fb560000b90 through 10:31:43.969122 ...]
00:24:54.272 [2024-07-25 10:31:43.969328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.969378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.969570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.969631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.969784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.969840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.969948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.969975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.970111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.970137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.970272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.970298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.970422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.970449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.970645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.970698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.970835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.970890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.971092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.971142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 
00:24:54.272 [2024-07-25 10:31:43.971351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.971399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.971509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.971541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.971650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.971678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.971811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.971878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.972113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.972160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.972270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.972297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.972505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.972554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.972715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.972769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.972997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.973047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.272 [2024-07-25 10:31:43.973200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.973253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 
00:24:54.272 [2024-07-25 10:31:43.973428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.272 [2024-07-25 10:31:43.973477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.272 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.973687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.973732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.973898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.973952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.975464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.975518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.975766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.975813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.975989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.976040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.976150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.976178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.976362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.976412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.976588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.976645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.976773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.976813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 
00:24:54.273 [2024-07-25 10:31:43.977001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.977058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.977223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.977250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.977383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.977409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.977690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.977740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.977871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.977923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.978038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.978063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.978165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.978190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.978301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.978327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.978495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.978523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.978687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.978713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 
00:24:54.273 [2024-07-25 10:31:43.978863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.978891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.979060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.979089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.979214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.979252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.979426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.979453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.979673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.979726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.979921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.979972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.980169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.980203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.980310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.980338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.980547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.980576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.980695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.980734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 
00:24:54.273 [2024-07-25 10:31:43.981663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.981695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.981914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.981964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.982193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.982248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.982355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.982381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.982628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.982684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.982876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.982903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.983056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.983110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.983243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.273 [2024-07-25 10:31:43.983299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.273 qpair failed and we were unable to recover it. 00:24:54.273 [2024-07-25 10:31:43.983407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.983433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.983627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.983678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 
00:24:54.274 [2024-07-25 10:31:43.983807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.983834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.984081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.984109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.984241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.984280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.984445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.984470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.984646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.984672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.984822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.984850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.984963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.984990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.985095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.985121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.985250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.985275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.985539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.985572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 
00:24:54.274 [2024-07-25 10:31:43.985710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.985766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.985978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.986028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.986144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.986171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.986306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.986332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.986487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.986515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.986640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.986668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.986800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.986827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.986961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.986988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.987122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.987153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.987272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.987298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 
00:24:54.274 [2024-07-25 10:31:43.987422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.987464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.987633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.987677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.987823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.987850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.988016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.988057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.988267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.988319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.988472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.988545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.988654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.988680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.988804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.988846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.988974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.989013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.989146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.989174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 
00:24:54.274 [2024-07-25 10:31:43.989321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.989353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.989541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.989567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.989774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.989816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.989948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.989989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.990121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.990165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.274 [2024-07-25 10:31:43.990333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.274 [2024-07-25 10:31:43.990379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.274 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.990507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.990546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.990669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.990695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.990870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.990931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.991154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.991205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 
00:24:54.275 [2024-07-25 10:31:43.991398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.991459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.991643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.991688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.991843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.991888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.992034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.992079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.992217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.992263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.992466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.992505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.992719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.992770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.992928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.992962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.994201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.994232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.994393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.994437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 
00:24:54.275 [2024-07-25 10:31:43.994635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.994667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.994821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.994870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.995019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.995061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.275 [2024-07-25 10:31:43.995186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-07-25 10:31:43.995226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.275 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.995385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.995439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.995680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.995732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.995892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.995947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.996088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.996133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.996335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.996392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.996545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.996585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 
00:24:54.559 [2024-07-25 10:31:43.996753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.996803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.997037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.997087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.997244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.997287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.997428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.997471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.997679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.997710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.997843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.997900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.998132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.998181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.998321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.998364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.998493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.998520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.998681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.998709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 
00:24:54.559 [2024-07-25 10:31:43.998890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.998916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.999057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.999105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.999282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.999324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.999455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.999505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.999656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.999683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:43.999870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:43.999901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.000036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.000078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.000205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.000250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.000401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.000429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.000576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.000623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 
00:24:54.559 [2024-07-25 10:31:44.000749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.000792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.000937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.000980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.001159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.001202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.001373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.001418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.001551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.001591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.001734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.001775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.002027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.002058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.002216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.002242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.002417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.002451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 00:24:54.559 [2024-07-25 10:31:44.002615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.559 [2024-07-25 10:31:44.002642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.559 qpair failed and we were unable to recover it. 
00:24:54.560 [2024-07-25 10:31:44.002887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.002941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.003076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.003102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.003214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.003241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.003339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.003364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.003466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.003500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.003637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.003662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.003796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.003822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.003939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.003964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.004071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.004103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.004230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.004274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 
00:24:54.560 [2024-07-25 10:31:44.004465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.004515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.004707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.004754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.004948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.004987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.005204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.005255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.005426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.005472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.005663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.005689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.005934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.005977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.006188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.006238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.006444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.006494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.006634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.006662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 
00:24:54.560 [2024-07-25 10:31:44.006882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.006922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.007074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.007153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.007304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.007372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.007538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.007586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.007719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.007759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.007917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.007959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.008102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.008145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.008281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.008333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.008477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.008537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.008671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.008711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 
00:24:54.560 [2024-07-25 10:31:44.008872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.008902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.009080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.009111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.009310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.009353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.009517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.009558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.009693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.009737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.009891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.009936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.010100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.560 [2024-07-25 10:31:44.010141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.560 qpair failed and we were unable to recover it. 00:24:54.560 [2024-07-25 10:31:44.010296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.010346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.010469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.010508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.010643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.010688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 
00:24:54.561 [2024-07-25 10:31:44.010822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.010865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.010968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.010994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.011121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.011164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.011320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.011364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.011504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.011550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.011704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.011742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.011881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.011926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.012079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.012107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.012271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.012318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.012451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.012500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 
00:24:54.561 [2024-07-25 10:31:44.012649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.012687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.012821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.012867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.013030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.013072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.013204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.013245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.013414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.013445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.013590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.013619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.013793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.013823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.013984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.014025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.014152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.014193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.014298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.014326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 
00:24:54.561 [2024-07-25 10:31:44.014435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.014462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.014625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.014659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.014813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.014854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.014978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.015004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.015112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.015138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.015254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.015280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.015383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.015409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.015537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.015581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.015692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.015719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.015852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.015895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 
00:24:54.561 [2024-07-25 10:31:44.016017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.016058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.016206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.016251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.016383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.016423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.016571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.561 [2024-07-25 10:31:44.016613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.561 qpair failed and we were unable to recover it. 00:24:54.561 [2024-07-25 10:31:44.016724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.016751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.016890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.016937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.017098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.017143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.017344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.017376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.017510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.017539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.017684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.017724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 
00:24:54.562 [2024-07-25 10:31:44.017866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.017911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.018039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.018071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.018228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.018271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.018401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.018442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.018600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.018644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.018778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.018820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.018952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.018993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.019118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.019160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.019290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.019331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.019503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.019560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 
00:24:54.562 [2024-07-25 10:31:44.019717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.019758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.020726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.020759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.020897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.020925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.021047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.021087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.021214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.021256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.021362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.021399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.021546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.021590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.021842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.021887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.022029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.022083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.022229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.022270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 
00:24:54.562 [2024-07-25 10:31:44.022391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.022418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.022524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.022550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.022684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.022712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.022853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.022879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.023185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.023212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.023314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.023341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.023510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.023537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.023649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.023675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.023784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.023809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.023930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.023957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 
00:24:54.562 [2024-07-25 10:31:44.024075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.024100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.562 [2024-07-25 10:31:44.024227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.562 [2024-07-25 10:31:44.024253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.562 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.024382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.024425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.024556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.024598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.024775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.024825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.024994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.025058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.025243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.025286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.025407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.025447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.026439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.026471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.026638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.026684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 
00:24:54.563 [2024-07-25 10:31:44.027424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.027455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.027623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.027676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.027786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.027813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.027921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.027947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.028081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.028122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.028252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.028290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.028397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.028424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.028574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.028616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.028739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.028803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.028998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.029051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 
00:24:54.563 [2024-07-25 10:31:44.029223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.029266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.029454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.029542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.030514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.030546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.030702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.030747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.031502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.031533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.031707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.031762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.031870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.031897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.032038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.032101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.032281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.032345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.032554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.032594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 
00:24:54.563 [2024-07-25 10:31:44.032744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.563 [2024-07-25 10:31:44.032785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.563 qpair failed and we were unable to recover it. 00:24:54.563 [2024-07-25 10:31:44.032926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.032967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.033103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.033152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.033289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.033340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.033473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.033528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.033663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.033705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.033832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.033876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.034047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.034102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.034251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.034296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.034439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.034484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 
00:24:54.564 [2024-07-25 10:31:44.034647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.034673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.034851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.034903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.035102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.035144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.035337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.035402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.035547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.035578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.035699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.035729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.035881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.035923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.036027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.036053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.036182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.036222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.036386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.036416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 
00:24:54.564 [2024-07-25 10:31:44.036570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.036602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.036761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.036803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.036935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.036976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.037136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.037178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.037309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.037353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.037471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.037511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.037626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.037654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.037766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.037794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.037918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.037946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.038127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.038180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 
00:24:54.564 [2024-07-25 10:31:44.038349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.038403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.038512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.038542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.038657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.038685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.038808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.038834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.038974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.039015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.039124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.039151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.039338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.039391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.039536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.564 [2024-07-25 10:31:44.039568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.564 qpair failed and we were unable to recover it. 00:24:54.564 [2024-07-25 10:31:44.039733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.039786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.039933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.039975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 
00:24:54.565 [2024-07-25 10:31:44.040114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.040143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.040251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.040278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.040406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.040434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.040565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.040592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.040700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.040726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.040835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.040862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.040984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.041024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.041182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.041235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.041428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.041529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.041644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.041673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 
00:24:54.565 [2024-07-25 10:31:44.041857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.041910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.042885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.042917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.043033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.043060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.043205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.043248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.043424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.043467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.043613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.043646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.043811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.043839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.043988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.044040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.044205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.044234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.044371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.044414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 
00:24:54.565 [2024-07-25 10:31:44.044558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.044586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.044696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.044723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.044836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.044862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.045000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.045026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.045132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.045159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.045327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.045381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.045543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.045587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.045752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.045795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.045974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.046001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.046143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.046185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 
00:24:54.565 [2024-07-25 10:31:44.046325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.046376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.046517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.046544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.046680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.046706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.046828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.046854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.046964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.046990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.047113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.565 [2024-07-25 10:31:44.047143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.565 qpair failed and we were unable to recover it. 00:24:54.565 [2024-07-25 10:31:44.047253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.047281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.047391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.047417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.047549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.047592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.047731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.047774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 
00:24:54.566 [2024-07-25 10:31:44.047930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.047957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.048085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.048127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.048255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.048301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.048409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.048436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.048598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.048642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.048822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.048851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.048994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.049036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.049145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.049172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.049296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.049339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.049449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.049476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 
00:24:54.566 [2024-07-25 10:31:44.049754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.049786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.049926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.049964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.050090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.050131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.050253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.050294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.050694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.050726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.050876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.050920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.051070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.051113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.051262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.051303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.051432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.051474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.051615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.051657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 
00:24:54.566 [2024-07-25 10:31:44.051787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.051830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.051961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.052001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.052131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.052174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.052298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.052341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.052458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.052501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.052637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.052679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.052820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.052862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.053003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.053045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.053172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.053213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.053340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.053382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 
00:24:54.566 [2024-07-25 10:31:44.053494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.053523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.053682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.053709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.053839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.053881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.566 [2024-07-25 10:31:44.054036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.566 [2024-07-25 10:31:44.054062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.566 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.054218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.054260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.054390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.054433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.054550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.054582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.054728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.054770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.054929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.054971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.055097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.055135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 
00:24:54.567 [2024-07-25 10:31:44.055265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.055307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.055417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.055445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.055590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.055623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.055748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.055774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.055887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.055914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.056036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.056066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.056198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.056240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.056348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.056374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.056518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.056546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.056702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.056744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 
00:24:54.567 [2024-07-25 10:31:44.056862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.056893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.057049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.057091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.057250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.057305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.058228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.058260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.058387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.058415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.058564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.058619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.058764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.058806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.058969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.059010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.059142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.059185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.059320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.059367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 
00:24:54.567 [2024-07-25 10:31:44.059508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.059550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.059718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.059749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.059915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.059956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.060098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.060141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.060271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.060310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.060444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.060474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.060653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.060697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.060846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.060901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.061062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.061089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.061209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.061255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 
00:24:54.567 [2024-07-25 10:31:44.061416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.567 [2024-07-25 10:31:44.061443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.567 qpair failed and we were unable to recover it. 00:24:54.567 [2024-07-25 10:31:44.061578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.061639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.062551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.062582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.062766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.062794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.062928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.062971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.063074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.063100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.063202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.063228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.063352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.063377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.063494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.063521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.063643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.063668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 
00:24:54.568 [2024-07-25 10:31:44.063804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.063845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.063954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.063980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.064083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.064108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.064222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.064248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.064357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.064384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.064499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.064525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.064633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.064662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.064788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.064815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.065579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.065609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.065727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.065755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 
00:24:54.568 [2024-07-25 10:31:44.065885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.065948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.066107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.066160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.066295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.066320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.066437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.066463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.066615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.066668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.066803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.066854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.066980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.067022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.067199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.067253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.067406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.067433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.067563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.067605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 
00:24:54.568 [2024-07-25 10:31:44.067742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.067772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.067921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.067961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.068099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.568 [2024-07-25 10:31:44.068140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.568 qpair failed and we were unable to recover it. 00:24:54.568 [2024-07-25 10:31:44.068289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.068330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.068445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.068478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.068654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.068682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.068806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.068847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.068987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.069027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.069161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.069205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.069344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.069398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 
00:24:54.569 [2024-07-25 10:31:44.069556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.069600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.069737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.069777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.069917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.069955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.070083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.070124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.070276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.070323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.070461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.070500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.070630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.070657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.070763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.070790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.070928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.070969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 00:24:54.569 [2024-07-25 10:31:44.071085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.569 [2024-07-25 10:31:44.071126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.569 qpair failed and we were unable to recover it. 
00:24:54.569 [2024-07-25 10:31:44.071292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.071319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.071444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.071489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.071658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.071712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.072471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.072510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.072704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.072748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.072927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.072973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.073145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.073200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.073306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.073331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.073463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.073504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.074300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.074331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.074444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.074473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.074643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.074701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.074821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.074847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.074992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.075034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.075171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.075215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.075341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.075366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.075508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.075551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.569 [2024-07-25 10:31:44.075661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.569 [2024-07-25 10:31:44.075688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.569 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.075798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.075824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.075944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.075971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.076082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.076108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.076219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.076248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.076354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.076380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.076531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.076558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.076685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.076729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.076896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.076937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.077063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.077127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.077278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.077320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.077440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.077466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.077611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.077666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.077787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.077815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.077940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.077968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.078091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.078118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.078228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.078254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.078363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.078390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.078517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.078549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.078671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.078697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.078800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.078826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.078936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.078967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.079158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.079215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.079331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.079364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.079518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.079561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.079715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.079769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.079911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.079953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.080066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.080092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.080208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.080234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.080339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.080365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.080470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.080506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.080610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.080636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.081426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.081457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.081639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.081693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.081822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.081848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.081971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.570 [2024-07-25 10:31:44.081997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.570 qpair failed and we were unable to recover it.
00:24:54.570 [2024-07-25 10:31:44.082101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.082127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.082243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.082271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.082375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.082402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.082531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.082578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.082712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.082754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.082866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.082893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.083064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.083117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.083275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.083331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.083444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.083470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.083587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.083614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.083745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.083786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.083894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.083919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.084027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.084052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.084166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.084193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.084303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.084329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.084474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.084525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.084646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.084688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.084817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.084859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.084984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.085024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.085148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.085189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.085326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.085367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.085495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.085542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.085705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.085759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.085907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.085960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.086060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.086085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.086216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.086258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.086390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.086432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.086590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.086630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.086756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.086783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.086893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.086920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.087030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.087058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.087231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.087275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.087398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.087441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.087568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.087597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.087730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.087811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.087933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.087975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.088098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.088143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.571 [2024-07-25 10:31:44.088299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.571 [2024-07-25 10:31:44.088340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.571 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.088462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.088518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.088684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.088727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.088829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.088854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.088960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.088988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.089099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.089128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.089261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.089309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.089435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.089475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.089607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.089648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.089772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.089811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.089929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.089969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.090098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.090139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.090272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.090314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.090417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.090442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.090610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.090662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.090796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.090850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.091006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.091057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.091218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.091265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.091405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.091444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.091580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.091638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.091753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.091780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.091923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.091975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.092079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.092105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.092241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.092283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.092405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.092446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.092617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.092673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.092845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.092898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.093004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.093030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.093196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.093238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.093372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.093425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.093542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.093570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.093730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.093761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.093960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.094011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.094155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.572 [2024-07-25 10:31:44.094207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.572 qpair failed and we were unable to recover it.
00:24:54.572 [2024-07-25 10:31:44.094366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.094423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.094595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.094639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.094742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.094768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.094933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.094975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.095109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.095149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.095319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.095374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.095549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.095603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.095766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.095818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.095952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.095994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.096157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.096182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.096324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.096379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.096508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.096537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.096700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.096730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.096836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.096862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.096988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.097027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.097200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.097256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.097432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.097475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.097606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.097632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.097781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.097833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.097995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.098048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.098185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.098268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.098465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.098536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.098705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.098758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.098911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.098968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.099112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.099153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.099309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.099362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.099525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.099566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.099728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.099755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.099862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.099888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.100077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.100128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.100291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.573 [2024-07-25 10:31:44.100333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.573 qpair failed and we were unable to recover it.
00:24:54.573 [2024-07-25 10:31:44.100493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.100540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.100658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.100700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.100866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.100895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.101016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.101042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.101149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.101175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.101320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.101376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.101546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.101593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.101774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.101801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.101964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.102011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.102140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.102205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.102340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.102370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.102523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.102580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.102749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.102791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.102895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.102920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.103063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.574 [2024-07-25 10:31:44.103115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.574 qpair failed and we were unable to recover it.
00:24:54.574 [2024-07-25 10:31:44.103287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.103339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.103525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.103553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.103657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.103683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.103845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.103874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.104068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.104119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.104286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.104328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.104522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.104563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.104704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.104746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.104850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.104876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.104981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.105006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 
00:24:54.574 [2024-07-25 10:31:44.105130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.105187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.105343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.105397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.105531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.105562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.105708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.105766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.105948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.105989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.106089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.106115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.106278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.106329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.106437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.106463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.106626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.106685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.574 [2024-07-25 10:31:44.106860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.106917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 
00:24:54.574 [2024-07-25 10:31:44.107088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.574 [2024-07-25 10:31:44.107146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.574 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.107324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.107350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.107457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.107489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.107621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.107664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.107795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.107877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.108055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.108108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.108227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.108269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.108369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.108395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.108559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.108585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.108775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.108832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 
00:24:54.575 [2024-07-25 10:31:44.108976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.109029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.109183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.109229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.109383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.109435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.109615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.109675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.109837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.109867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.110042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.110084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.110193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.110220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.110397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.110427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.110588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.110645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.110748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.110775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 
00:24:54.575 [2024-07-25 10:31:44.110909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.110940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.111133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.111160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.111307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.111356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.111490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.111535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.111641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.111667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.111769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.111797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.111952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.112012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.112192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.112234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.112402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.112428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.112551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.112609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 
00:24:54.575 [2024-07-25 10:31:44.112772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.112825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.112980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.113031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.113206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.113236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.113421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.113477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.113619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.113651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.575 [2024-07-25 10:31:44.113850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.575 [2024-07-25 10:31:44.113903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.575 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.114047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.114088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.114211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.114251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.114415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.114470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.114592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.114620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 
00:24:54.576 [2024-07-25 10:31:44.114764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.114845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.115014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.115044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.115225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.115269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.115435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.115496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.115671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.115722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.115883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.115936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.116071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.116132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.116273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.116318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.116463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.116529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.116727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.116780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 
00:24:54.576 [2024-07-25 10:31:44.116930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.116983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.117154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.117208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.117393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.117442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.117633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.117690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.117799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.117826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.117975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.118027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.118185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.118215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.118332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.118358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.118485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.118518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.118697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.118747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 
00:24:54.576 [2024-07-25 10:31:44.118905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.118959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.119087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.119167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.119276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.119303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.119423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.119494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.119671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.119697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.119837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.119883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.120040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.120092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.120260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.120311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.120416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.120443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.120554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.120581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 
00:24:54.576 [2024-07-25 10:31:44.120709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.120754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.120888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.120931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.121036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.576 [2024-07-25 10:31:44.121062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.576 qpair failed and we were unable to recover it. 00:24:54.576 [2024-07-25 10:31:44.121183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.121224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.121330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.121357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.121514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.121559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.121697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.121736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.121915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.121969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.122126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.122169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.122273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.122300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 
00:24:54.577 [2024-07-25 10:31:44.122462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.122517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.122632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.122660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.122796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.122838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.123010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.123036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.123197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.123260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.123415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.123467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.123620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.123665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.123830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.123859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.124083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.124139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.124324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.124369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 
00:24:54.577 [2024-07-25 10:31:44.124502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.124543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.124684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.124736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.124878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.124930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.125089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.125132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.125318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.125373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.125488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.125514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.125635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.125702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.125886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.125928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.126078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.126119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.126281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.126311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 
00:24:54.577 [2024-07-25 10:31:44.126432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.126462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.126651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.126694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.126872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.126899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.127049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.127101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.127266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.127323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.127503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.127556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.577 [2024-07-25 10:31:44.127763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.577 [2024-07-25 10:31:44.127812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.577 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.127924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.127952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.128060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.128087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.128229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.128281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 
00:24:54.578 [2024-07-25 10:31:44.128449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.128511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.128635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.128698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.128857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.128887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.129039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.129091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.129250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.129276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.129377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.129404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.129562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.129606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.129764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.129816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.130019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.130075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.130214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.130266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 
00:24:54.578 [2024-07-25 10:31:44.130458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.130520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.130633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.130659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.130818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.130874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.131035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.131091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.131194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.131220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.131331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.131358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.131543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.131571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.131745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.131791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.131968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.132022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.132168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.132222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 
00:24:54.578 [2024-07-25 10:31:44.132389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.132433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.132560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.132587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.132745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.132799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.132969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.133012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.133208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.133251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.133362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.133389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.133582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.133631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.578 [2024-07-25 10:31:44.133746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.578 [2024-07-25 10:31:44.133775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.578 qpair failed and we were unable to recover it. 00:24:54.579 [2024-07-25 10:31:44.133880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.579 [2024-07-25 10:31:44.133907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.579 qpair failed and we were unable to recover it. 00:24:54.579 [2024-07-25 10:31:44.134113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.579 [2024-07-25 10:31:44.134162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.579 qpair failed and we were unable to recover it. 
[... the identical three-line connect() failure (errno = 111) repeats continuously from 10:31:44.134335 through 10:31:44.175191, cycling over tqpair handles 0x7fb558000b90, 0x7fb560000b90, 0x7fb568000b90, and 0x1bed120, all against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:54.584 [2024-07-25 10:31:44.175343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.175391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.175508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.175536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.175727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.175753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.175933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.175959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.176142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.176183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.176380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.176427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.176533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.176559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.176709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.176750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.176870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.176900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.177047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.177100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 
00:24:54.584 [2024-07-25 10:31:44.177205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.584 [2024-07-25 10:31:44.177233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.584 qpair failed and we were unable to recover it. 00:24:54.584 [2024-07-25 10:31:44.177339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.177366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.177559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.177586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.177721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.177775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.177961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.178023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.178233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.178284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.178424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.178472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.178586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.178614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.178736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.178778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.178884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.178910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 
00:24:54.585 [2024-07-25 10:31:44.179100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.179149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.179352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.179395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.179502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.179531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.179644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.179672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.179784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.179814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.180024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.180074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.180243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.180293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.180468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.180527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.180727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.180781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.180892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.180917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 
00:24:54.585 [2024-07-25 10:31:44.181061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.181109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.181264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.181306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.181533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.181559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.181667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.181694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.181919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.181965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.182168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.182216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.182356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.182385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.182578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.182632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.182803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.182835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.182977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.183030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 
00:24:54.585 [2024-07-25 10:31:44.183138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.183164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.183318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.183366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.183567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.183611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.183775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.183804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.183981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.184035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.184240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.184296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.184433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.184460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.184675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.184725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.585 [2024-07-25 10:31:44.184925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.585 [2024-07-25 10:31:44.184977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.585 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.185125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.185178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 
00:24:54.586 [2024-07-25 10:31:44.185333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.185377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.185620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.185667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.185842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.185883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.185988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.186014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.186186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.186213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.186455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.186508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.186723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.186774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.186959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.187013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.187183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.187210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.187340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.187396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 
00:24:54.586 [2024-07-25 10:31:44.187617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.187670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.187887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.187935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.188125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.188168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.188355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.188382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.188572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.188601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.188756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.188786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.188982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.189031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.189135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.189162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.189264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.189290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.189398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.189425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 
00:24:54.586 [2024-07-25 10:31:44.189549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.189575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.189726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.189770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.189936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.189987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.190151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.190178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.190388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.190444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.190576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.190605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.190747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.190773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.190914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.190941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.191098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.191150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.191343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.191370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 
00:24:54.586 [2024-07-25 10:31:44.191564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.191608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.191811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.191861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.192058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.192108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.192250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.192303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.192529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.192579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.192714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.586 [2024-07-25 10:31:44.192765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.586 qpair failed and we were unable to recover it. 00:24:54.586 [2024-07-25 10:31:44.192945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.193001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.193105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.193130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.193297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.193323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.193528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.193576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 
00:24:54.587 [2024-07-25 10:31:44.193742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.193795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.193950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.194002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.194154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.194184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.194357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.194387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.194564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.194619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.194745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.194798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.195038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.195087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.195292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.195322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.195442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.195468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.195704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.195749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 
00:24:54.587 [2024-07-25 10:31:44.195895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.195936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.196109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.196148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.196326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.196379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.196580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.196629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.196811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.196865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.197001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.197053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.197256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.197303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.197532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.197585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.197779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.197805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.197914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.197940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 
00:24:54.587 [2024-07-25 10:31:44.198098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.198147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.198306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.198332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.198445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.198473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.198676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.198717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.198851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.198895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.199077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.199104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.199256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.199299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.199513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.587 [2024-07-25 10:31:44.199556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.587 qpair failed and we were unable to recover it. 00:24:54.587 [2024-07-25 10:31:44.199768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.199817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.199966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.200015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 
00:24:54.588 [2024-07-25 10:31:44.200199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.200225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.200340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.200366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.200540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.200592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.200772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.200833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.201016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.201045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.201192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.201232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.201443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.201500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.201613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.201641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.201873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.201918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 00:24:54.588 [2024-07-25 10:31:44.202129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.588 [2024-07-25 10:31:44.202160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.588 qpair failed and we were unable to recover it. 
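A note for triage: errno = 111 is Linux's ECONNREFUSED, meaning the TCP SYN to 10.0.0.2:4420 was answered with a RST — the address was reachable, but nothing was listening on the NVMe/TCP port at that moment (typically the target application was not up yet or had already exited). A one-liner to confirm the errno mapping on the build host (plain C, nothing SPDK-specific):

/* Decode the errno seen in the log; on Linux this prints
 * "errno 111 = Connection refused" (ECONNREFUSED). */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    printf("errno %d = %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
    return 0;
}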
00:24:54.588 [2024-07-25 10:31:44.202282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.588 [2024-07-25 10:31:44.202309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.588 qpair failed and we were unable to recover it.
[... about 40 more identical connect()/qpair-error cycles follow between 10:31:44.202 and 10:31:44.213, now mostly against a fourth handle, tqpair=0x7fb568000b90, with the remainder against 0x7fb560000b90, 0x7fb558000b90, and 0x1bed120; every attempt fails with errno = 111 and no qpair recovers ...]
00:24:54.589 [2024-07-25 10:31:44.213982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.214010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.214116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.214143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.214296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.214338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.214518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.214574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.214803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.214868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.215099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.215127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.215351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.215410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.215654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.215713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.215924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.215986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.216145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.216200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 
00:24:54.589 [2024-07-25 10:31:44.216404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.216463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.216637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.216667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.216876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.216928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.217096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.217149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.217304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.217355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.217564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.217614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.217795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.217847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.218016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.218068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.218278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.218340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.218524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.218553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 
00:24:54.589 [2024-07-25 10:31:44.218706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.218760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.218868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.218894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.219119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.589 [2024-07-25 10:31:44.219167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.589 qpair failed and we were unable to recover it. 00:24:54.589 [2024-07-25 10:31:44.219344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.219401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.219669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.219738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.219903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.219931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.220157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.220207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.220369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.220396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.220509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.220537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.220732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.220784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 
00:24:54.590 [2024-07-25 10:31:44.220917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.220946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.221140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.221204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.221452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.221542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.221750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.221812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.222104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.222163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.222361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.222402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.222625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.222686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.222961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.223018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.223272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.223318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.223628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.223688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 
00:24:54.590 [2024-07-25 10:31:44.223962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.224021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.224227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.224287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.224559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.224589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.224711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.224738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.224904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.224954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.225153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.225179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.225380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.225431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.225607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.225651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.225855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.225908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.226135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.226178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 
00:24:54.590 [2024-07-25 10:31:44.226400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.226448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.226619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.226665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.226840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.226892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.227099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.227151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.227255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.227280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.227452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.227518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.227676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.227729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.227934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.227983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.228105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.228164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.228325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.228350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 
00:24:54.590 [2024-07-25 10:31:44.228512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.590 [2024-07-25 10:31:44.228539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.590 qpair failed and we were unable to recover it. 00:24:54.590 [2024-07-25 10:31:44.228724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.228765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.228894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.228923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.229171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.229237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.229438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.229512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.229725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.229783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.230041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.230067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.230297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.230324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.230572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.230618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.230811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.230861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 
00:24:54.591 [2024-07-25 10:31:44.231020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.231070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.231226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.231252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.231393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.231448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.231612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.231655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.231848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.231890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.232047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.232098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.232246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.232298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.232514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.232553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.232709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.232765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.232913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.232961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 
00:24:54.591 [2024-07-25 10:31:44.233156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.233181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.233387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.233435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.233626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.233680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.233840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.233884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.234048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.234076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.234223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.234274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.234454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.234502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.234738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.234790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.234979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.235005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.235186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.235239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 
00:24:54.591 [2024-07-25 10:31:44.235429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.235485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.235694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.235743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.235861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.235889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.236107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.236174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.236322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.236376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.236551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.236595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.236840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.236899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.237175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.237233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.591 [2024-07-25 10:31:44.237530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.591 [2024-07-25 10:31:44.237575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.591 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.237840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.237866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 
00:24:54.592 [2024-07-25 10:31:44.238102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.238132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.238391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.238416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.238596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.238626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.238734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.238760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.238968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.239016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.239240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.239298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.239514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.239559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.239774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.239823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.240050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.240101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.240262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.240316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 
00:24:54.592 [2024-07-25 10:31:44.240530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.240558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.240684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.240745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.240957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.241003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.241212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.241258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.241503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.241548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.241762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.241811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.241916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.241942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.242076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.242117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.242252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.242304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.242556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.242606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 
00:24:54.592 [2024-07-25 10:31:44.242717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.242744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.242901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.242954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.243199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.243229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.243387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.243439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.243639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.243692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.243840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.243881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.244030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.244084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.244257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.244314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.244511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.244558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 00:24:54.592 [2024-07-25 10:31:44.244669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.592 [2024-07-25 10:31:44.244699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.592 qpair failed and we were unable to recover it. 
00:24:54.592 [2024-07-25 10:31:44.244923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.592 [2024-07-25 10:31:44.244976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.592 qpair failed and we were unable to recover it.
00:24:54.593 [2024-07-25 10:31:44.245617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.593 [2024-07-25 10:31:44.245682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.593 qpair failed and we were unable to recover it.
00:24:54.593 [2024-07-25 10:31:44.247117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.593 [2024-07-25 10:31:44.247145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.593 qpair failed and we were unable to recover it.
00:24:54.593 [2024-07-25 10:31:44.250831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.593 [2024-07-25 10:31:44.250890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.593 qpair failed and we were unable to recover it.
[... retries continue for tqpair=0x7fb560000b90 (10:31:44.291304-10:31:44.291651) ...]
00:24:54.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1585963 Killed "${NVMF_APP[@]}" "$@"
[... retries continue (10:31:44.291834-10:31:44.292119) ...]
00:24:54.598 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[... retry at 10:31:44.292224 ...]
00:24:54.598 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:54.598 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
[... retry at 10:31:44.292493 ...]
00:24:54.598 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:54.598 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... retries continue for tqpair=0x7fb560000b90 (10:31:44.292747-10:31:44.297105) ...]
00:24:54.598 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1586398
00:24:54.598 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:54.598 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1586398
[... retry at 10:31:44.297269 ...]
00:24:54.599 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1586398 ']'
00:24:54.599 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:54.599 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:54.599 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:54.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:54.599 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:54.599 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... retries continue for tqpair=0x7fb560000b90 (10:31:44.299159-10:31:44.299921) ...]
00:24:54.599 [2024-07-25 10:31:44.300090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.599 [2024-07-25 10:31:44.300142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.599 qpair failed and we were unable to recover it.
[... the same connect() failed (errno 111 = ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats from 10:31:44.300360 through 10:31:44.323351, alternating among tqpair=0x7fb560000b90, 0x7fb568000b90, and 0x7fb558000b90, all with addr=10.0.0.2, port=4420 ...]
00:24:54.877 [2024-07-25 10:31:44.323456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.323487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.323614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.323640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.323752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.323778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.323887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.323913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.324043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.324068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.324176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.324204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.324318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.324348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.324461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.324499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.324626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.324653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.324758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.324784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 
00:24:54.877 [2024-07-25 10:31:44.324891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.324922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.325023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.325049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.325159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.325186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.325292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.325318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.877 [2024-07-25 10:31:44.325426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.877 [2024-07-25 10:31:44.325451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.877 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.325597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.325625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.325730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.325756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.325874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.325900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.326037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.326064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.326175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.326202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 
00:24:54.878 [2024-07-25 10:31:44.326339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.326365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.326472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.326505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.326630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.326656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.326757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.326783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.326925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.326952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.327065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.327094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.327232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.327258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.327424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.327449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.327559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.327585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.327683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.327708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 
00:24:54.878 [2024-07-25 10:31:44.327813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.327839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.327967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.327993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.328101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.328128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.328232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.328257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.328368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.328397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.328544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.328571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.328675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.328702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.328846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.328872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.328980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.329005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.329113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.329142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 
00:24:54.878 [2024-07-25 10:31:44.329285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.329311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.329412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.329438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.329602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.329629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.329735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.329761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.329873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.329901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.330039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.330066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.330226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.330256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.330395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.330422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.330530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.330556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.330695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.330723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 
00:24:54.878 [2024-07-25 10:31:44.330830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.330860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.330998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.331023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.331157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.331184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.331292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.331318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.331461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.331499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.331640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.331666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.331777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.331804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.331945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.331971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.332075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.332104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.332239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.332265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 
00:24:54.878 [2024-07-25 10:31:44.332375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.332403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.332517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.332551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.332664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.332690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.332829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.332856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.332987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.333013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.333116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.333142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.333276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.333301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.333440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.333466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.333613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.333639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.333746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.333772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 
00:24:54.878 [2024-07-25 10:31:44.333875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.333901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.334041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.334067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.334171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.334197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.334305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.334331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.334439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.334466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.334610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.334635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.334743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.334768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.334925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.334963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.335091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.335120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.335227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.335254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 
00:24:54.878 [2024-07-25 10:31:44.335391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.335416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.335528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.335556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.878 [2024-07-25 10:31:44.335699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.878 [2024-07-25 10:31:44.335726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.878 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.335858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.335885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.336024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.336049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.336157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.336185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.336323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.336351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.336490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.336522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.336629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.336656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.336797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.336824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 
00:24:54.879 [2024-07-25 10:31:44.336954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.336983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.337094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.337120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.337227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.337252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.337358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.337383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.337501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.337528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.337633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.337660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.337768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.337794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.337901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.337927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.338035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.338060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.338184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.338210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 
00:24:54.879 [2024-07-25 10:31:44.338314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.338340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.338468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.338506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.338621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.338649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.338777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.338807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.338924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.338950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.339056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.339082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.339206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.339234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.339342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.339368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.339473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.339506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.339619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.339646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 
00:24:54.879 [2024-07-25 10:31:44.339782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.339810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.339935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.339961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.340083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.340109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.340219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.340245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.340383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.340410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.340545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.340573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.340710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.340736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.340875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.340910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.341036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.341064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.341191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.341221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 
00:24:54.879 [2024-07-25 10:31:44.341360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.341388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.341520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.341547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.341657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.341683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.341792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.341820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.341928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.341955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.342065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.342093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.342240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.342268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.342391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.342418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.342551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.342577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 00:24:54.879 [2024-07-25 10:31:44.342683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.879 [2024-07-25 10:31:44.342710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.879 qpair failed and we were unable to recover it. 
00:24:54.879 [2024-07-25 10:31:44.342838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.879 [2024-07-25 10:31:44.342867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.879 qpair failed and we were unable to recover it.
00:24:54.880 [the connect() / sock-connection-error / qpair-failed triplet above repeats back to back from 10:31:44.342 onward, cycling through tqpairs 0x7fb560000b90, 0x7fb558000b90, 0x7fb568000b90, and 0x1bed120, always against addr=10.0.0.2, port=4420]
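errno = 111 in the triplets above is ECONNREFUSED on Linux: the TCP SYN to 10.0.0.2:4420 (the standard NVMe/TCP port) is being answered with RST because nothing is listening there yet. Below is a minimal standalone C sketch, not SPDK source, that reproduces the same connect() failure; the address and port are copied from the log, and it assumes the target host is reachable but has no listener bound on the port.

```c
/* Minimal sketch (not SPDK source): reproduce "connect() failed, errno = 111"
 * against the log's target when no listener is bound on the port.
 * 111 is ECONNREFUSED on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target addr from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host reachable but the port closed, this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```

The initiator keeps retrying each qpair, which is why the triplet repeats; the SPDK target process is evidently still starting up, as the initialization banner that surfaces mid-stream just below suggests.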
00:24:54.880 [2024-07-25 10:31:44.351703] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization...
00:24:54.880 [2024-07-25 10:31:44.351791] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:54.880 [further connect()/qpair-failed triplets for tqpairs 0x7fb568000b90, 0x7fb560000b90, and 0x1bed120 were interleaved with the two initialization messages above]
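For reference on the EAL line above: -c 0xF0 is DPDK's hexadecimal core mask, so this nvmf target is pinned to cores 4-7 (0xF0 = 0b11110000). A tiny illustrative sketch of how such a mask decodes (not DPDK code):

```c
/* Decode a DPDK-style core mask such as the "-c 0xF0" in the EAL line above. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xF0;   /* from "-c 0xF0" */
    printf("cores:");
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core))
            printf(" %d", core);
    }
    printf("\n");                      /* prints "cores: 4 5 6 7" */
    return 0;
}
```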
00:24:54.883 [the identical failure triplet continues through 10:31:44.373, still cycling over tqpairs 0x1bed120, 0x7fb558000b90, 0x7fb560000b90, and 0x7fb568000b90 against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:24:54.883 [2024-07-25 10:31:44.373144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.373171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.373281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.373307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.373408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.373434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.373562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.373592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.373698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.373724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.373869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.373895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.374002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.374028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.374137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.374163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.374276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.374306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.374414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.374443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 
00:24:54.883 [2024-07-25 10:31:44.374557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.374585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.374695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.374722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.374862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.374889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.375009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.375034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.375141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.375167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.375291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.375318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.375452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.375478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.375610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.375636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.375747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.375774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.375912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.375941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 
00:24:54.883 [2024-07-25 10:31:44.376075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.376101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.376207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.376233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.376370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.376396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.376505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.376532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.376650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.376676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.376811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.376837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.376949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.376976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.377095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.377120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.377230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.377259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.377411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.377437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 
00:24:54.883 [2024-07-25 10:31:44.377574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.377600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.377711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.377739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.377894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.377921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.378058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.883 [2024-07-25 10:31:44.378087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.883 qpair failed and we were unable to recover it. 00:24:54.883 [2024-07-25 10:31:44.378192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.378220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.378340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.378366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.378500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.378526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.378647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.378675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.378787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.378814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.378939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.378970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 
00:24:54.884 [2024-07-25 10:31:44.379110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.379138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.379264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.379293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.379407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.379433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.379565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.379591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.379704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.379732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.379861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.379890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.380010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.380049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.380171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.380201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.380319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.380345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.380464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.380497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 
00:24:54.884 [2024-07-25 10:31:44.380611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.380637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.380757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.380783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.380896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.380923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.381045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.381072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.381195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.381224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.381335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.381362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.381471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.381504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.381612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.381639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.381760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.381790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.381894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.381920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 
00:24:54.884 [2024-07-25 10:31:44.382027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.382054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.382184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.382213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.382328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.382358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.382469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.382502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.382627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.382655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.382765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.382791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.382919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.382946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.383065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.383091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.383219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.383247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.383374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.383403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 
00:24:54.884 [2024-07-25 10:31:44.383529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.383557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.383667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.383694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.383820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.383847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.383957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.383984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.384091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.384118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.384224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.384250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.384356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.384383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.384510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.384538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.384660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.384686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.384793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.384819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 
00:24:54.884 [2024-07-25 10:31:44.384941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.384966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.385090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.385118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.385224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.385250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.385364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.385393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.385514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.385542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.385650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.385681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.385804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.385830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.385941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.385970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.386094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.386120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.386223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.386249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 
00:24:54.884 [2024-07-25 10:31:44.386368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.386394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.386504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.386532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.386660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.386688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.386808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.386836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.386948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.386976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.387098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.387125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.387259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.884 [2024-07-25 10:31:44.387285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.884 qpair failed and we were unable to recover it. 00:24:54.884 [2024-07-25 10:31:44.387406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.387432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.387544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.387572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.387704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.387730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 
00:24:54.885 [2024-07-25 10:31:44.387852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.387877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.387982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.388010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.388115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.388141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.388246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.388272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.388395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.388421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.388531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.388559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.388681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.388707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.388836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.388863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.388969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.388995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 00:24:54.885 [2024-07-25 10:31:44.389118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.885 [2024-07-25 10:31:44.389143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.885 qpair failed and we were unable to recover it. 
00:24:54.885 EAL: No free 2048 kB hugepages reported on node 1
00:24:54.885 [... the same connect() failure pattern continues through 10:31:44.397 for the same four tqpairs ...]
00:24:54.885 [2024-07-25 10:31:44.397811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.885 [2024-07-25 10:31:44.397837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.885 qpair failed and we were unable to recover it.
00:24:54.886 [2024-07-25 10:31:44.397944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.397971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.398075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.398102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.398236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.398262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.398368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.398394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.398516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.398545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.398679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.398705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.398815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.398843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.398953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.398980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.399087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.399114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.399232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.399261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 
00:24:54.886 [2024-07-25 10:31:44.399377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.399403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.399518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.399545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.399672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.399700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.399810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.399838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.399950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.399978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.400086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.400113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.400231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.400257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.400370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.400396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.400509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.400537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.400677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.400704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 
00:24:54.886 [2024-07-25 10:31:44.400814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.400842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.400954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.400981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.401106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.401135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.401240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.401265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.401374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.401402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.401509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.401535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.401638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.401664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.401799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.401825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.401944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.401970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.402084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.402109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 
00:24:54.886 [2024-07-25 10:31:44.402213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.402239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.402347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.402374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.402489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.402516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.402642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.402669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.402806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.402832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.402945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.402977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.403112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.403138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.403251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.403280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.403394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.403420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.403536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.403563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 
00:24:54.886 [2024-07-25 10:31:44.403687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.403713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.403827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.403853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.403964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.403991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.404096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.404122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.404254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.404280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.404385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.404410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.404515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.404542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.404646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.404671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.404786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.404811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.404944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.404973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 
00:24:54.886 [2024-07-25 10:31:44.405078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.405106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.405216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.405242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.405353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.405380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.405489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.405516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.405624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.405650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.405769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.405796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.405963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.405989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.406089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.406114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.406228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.406256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.406362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.406388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 
00:24:54.886 [2024-07-25 10:31:44.406502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.406529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.406633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.406658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.406779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.406813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.406941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.406971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.407080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.407105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.407215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.407243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.407348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.886 [2024-07-25 10:31:44.407373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.886 qpair failed and we were unable to recover it. 00:24:54.886 [2024-07-25 10:31:44.407485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.407512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.407620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.407646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.407757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.407784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 
00:24:54.887 [2024-07-25 10:31:44.407890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.407915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.408056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.408081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.408213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.408239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.408348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.408375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.408496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.408527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.408638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.408663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.408773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.408798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.408904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.408929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.409032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.409058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.409160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.409186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 
00:24:54.887 [2024-07-25 10:31:44.409328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.409353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.409502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.409530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.409669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.409695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.409800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.409825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.409960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.409986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.410150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.410175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.410294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.410323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.410434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.410460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.410576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.410604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.410723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.410748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 
00:24:54.887 [2024-07-25 10:31:44.410882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.410908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.411013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.411040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.411179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.411204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.411314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.411341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.411460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.411496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.411609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.411636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.411763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.411789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.411913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.411940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.412045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.412071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.412178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.412206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 
00:24:54.887 [2024-07-25 10:31:44.412326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.412352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.412495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.412522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.412630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.412661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.412779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.412805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.412905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.412931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.413048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.413074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.413175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.413200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.413300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.413326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.413434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.413460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.413621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.413648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 
00:24:54.887 [2024-07-25 10:31:44.413750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.413776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.413880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.413906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.414033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.414058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.414167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.414196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.414301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.414328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.414438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.414463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.414602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.414628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.414751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.414776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.414878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.414904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.415036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.415061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 
00:24:54.887 [2024-07-25 10:31:44.415197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.415222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.415338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.415364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.415487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.415514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.415619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.415645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.415752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.415777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.415934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.415962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.416112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.887 [2024-07-25 10:31:44.416138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.887 qpair failed and we were unable to recover it. 00:24:54.887 [2024-07-25 10:31:44.416245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.888 [2024-07-25 10:31:44.416271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.888 qpair failed and we were unable to recover it. 00:24:54.888 [2024-07-25 10:31:44.416389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.888 [2024-07-25 10:31:44.416415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.888 qpair failed and we were unable to recover it. 00:24:54.888 [2024-07-25 10:31:44.416532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.888 [2024-07-25 10:31:44.416560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.888 qpair failed and we were unable to recover it. 
00:24:54.888 [2024-07-25 10:31:44.416667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.888 [2024-07-25 10:31:44.416695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.888 qpair failed and we were unable to recover it.
00:24:54.888 [... the same three-record sequence (connect() failed, errno = 111 / sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back, cycling through tqpairs 0x7fb568000b90, 0x7fb560000b90, and 0x7fb558000b90 ...]
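On Linux, errno 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 (the NVMe/TCP default port) is answered with a RST because nothing is listening there, so nvme_tcp_qpair_connect_sock cannot establish the qpair. The standalone sketch below is plain POSIX C, not SPDK code; it borrows the address and port from the log purely for illustration and reproduces the same errno when no target is listening:

/* Standalone repro of the failure mode above: a TCP connect() to an
 * address/port with no listener fails with errno 111 (ECONNREFUSED)
 * on Linux. Plain POSIX, not SPDK; the address and port are taken
 * from the log records for illustration only. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),        /* NVMe/TCP default port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With no listener this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Since ECONNREFUSED means no listener at the destination, the pattern in this run points at the target side of the test not (or no longer) listening on 10.0.0.2:4420 while the host side keeps re-dialing, which is what produces the long run of identical records.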
00:24:54.888 [2024-07-25 10:31:44.423072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:54.891 [... the connect() failed (errno = 111) / sock connection error / qpair failed and we were unable to recover it. sequence continues uninterrupted around the NOTICE above and through 2024-07-25 10:31:44.446598, still alternating among tqpairs 0x7fb568000b90, 0x7fb560000b90, and 0x7fb558000b90 ...]
00:24:54.891 [2024-07-25 10:31:44.446706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.446732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.446868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.446894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.447000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.447026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.447160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.447198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.447325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.447355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.447459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.447489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.447600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.447627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.447732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.447759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.447878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.447907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.448047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.448074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 
00:24:54.891 [2024-07-25 10:31:44.448179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.448206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.448345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.448374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.448487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.448518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.448627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.448654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.448757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.448783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.448888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.448916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.449051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.449081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.449186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.449211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.449317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.449345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.449456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.449492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 
00:24:54.891 [2024-07-25 10:31:44.449632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.449660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.449791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.449817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.449929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.449956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.450071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.450101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.450211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.450240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.450349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.450377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.450493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.450520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.450630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.450655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.450762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.450788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.450890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.450916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 
00:24:54.891 [2024-07-25 10:31:44.451031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.451059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.451164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.451190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.451294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.451319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.451426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.451454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.451569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.451595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.451700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.451726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.451826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.451852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.451953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.451979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.452096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.452125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.452232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.452259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 
00:24:54.891 [2024-07-25 10:31:44.452391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.452418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.452530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.452558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.452673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.452701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.452816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.452850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.452975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.453003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.453123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.453150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.453298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.453324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.453425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.453451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.453566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.453593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.453701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.453730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 
00:24:54.891 [2024-07-25 10:31:44.453846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.453874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.453985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.454012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.454127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.454153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.454261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.454287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.454395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.454422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.454529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.454558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.454695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.454726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.454878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.454904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.891 [2024-07-25 10:31:44.455012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.891 [2024-07-25 10:31:44.455039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.891 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.455140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.455166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 
00:24:54.892 [2024-07-25 10:31:44.455270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.455297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.455407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.455434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.455574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.455600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.455733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.455759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.455870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.455900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.456006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.456033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.456145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.456172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.456276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.456302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.456415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.456442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.456553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.456580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 
00:24:54.892 [2024-07-25 10:31:44.456698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.456726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.456866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.456893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.457003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.457028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.457144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.457170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.457276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.457302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.457410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.457439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.457554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.457581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.457717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.457742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.457846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.457872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.457976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.458002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 
00:24:54.892 [2024-07-25 10:31:44.458139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.458166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.458270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.458296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.458402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.458429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.458565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.458600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.458770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.458798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.458908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.458936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.459072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.459098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.459198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.459224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.459332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.459360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.459469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.459503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 
00:24:54.892 [2024-07-25 10:31:44.459616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.459643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.459758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.459787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.459892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.459919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.460027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.460053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.460157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.460183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.460289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.460315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.460430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.460457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.460610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.460637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.460774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.460803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.460917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.460945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 
00:24:54.892 [2024-07-25 10:31:44.461099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.461125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.461273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.461300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.461402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.461428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.461540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.461567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.461673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.461700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.461834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.461860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.461970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.461999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.462108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.462134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.462241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.462268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.462400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.462426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 
00:24:54.892 [2024-07-25 10:31:44.462568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.462594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.462702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.462728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.462831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.462858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.462969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.462997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.463100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.463125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.463223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.463249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.463358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.463387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.463513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.463542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.463652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.463678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.463787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.463814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 
00:24:54.892 [2024-07-25 10:31:44.463921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.463949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.464083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.464112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.464252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.464280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.464387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.892 [2024-07-25 10:31:44.464417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.892 qpair failed and we were unable to recover it. 00:24:54.892 [2024-07-25 10:31:44.464522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.893 [2024-07-25 10:31:44.464549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.893 qpair failed and we were unable to recover it. 00:24:54.893 [2024-07-25 10:31:44.464656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.893 [2024-07-25 10:31:44.464682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.893 qpair failed and we were unable to recover it. 00:24:54.893 [2024-07-25 10:31:44.464786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.893 [2024-07-25 10:31:44.464811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.893 qpair failed and we were unable to recover it. 00:24:54.893 [2024-07-25 10:31:44.464913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.893 [2024-07-25 10:31:44.464939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.893 qpair failed and we were unable to recover it. 00:24:54.893 [2024-07-25 10:31:44.465040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.893 [2024-07-25 10:31:44.465065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.893 qpair failed and we were unable to recover it. 00:24:54.893 [2024-07-25 10:31:44.465170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.893 [2024-07-25 10:31:44.465196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.893 qpair failed and we were unable to recover it. 
00:24:54.893 [2024-07-25 10:31:44.465307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.893 [2024-07-25 10:31:44.465335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.893 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) repeats continuously from 10:31:44.465 to 10:31:44.495 for tqpair=0x7fb558000b90, 0x7fb560000b90, and 0x7fb568000b90, each targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:54.896 [2024-07-25 10:31:44.495197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.495224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.495335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.495363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.495486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.495514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.495624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.495649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.495760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.495787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.495895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.495920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.496030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.496059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.496198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.496225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.496329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.496355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.496459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.496491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 
00:24:54.896 [2024-07-25 10:31:44.496597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.496623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.496744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.496771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.496881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.496908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.497047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.497073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.497184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.497211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.497347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.497376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.497518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.497570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.497676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.497704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.497807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.497834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.497948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.497974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 
00:24:54.896 [2024-07-25 10:31:44.498083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.498110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.498217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.498245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.498386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.498413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.498548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.498580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.498691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.498718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.498836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.498869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.498978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.499006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.499114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.499140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.499248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.499275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.499376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.499403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 
00:24:54.896 [2024-07-25 10:31:44.499515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.499543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.499650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.499678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.499786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.499811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.499942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.499969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.500073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.500100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.500207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.500234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.500341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.500368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.500476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.500507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.500615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.500642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.500751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.500778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 
00:24:54.896 [2024-07-25 10:31:44.500927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.500955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.501060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.501086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.501195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.501222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.501332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.501358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.501470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.501503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.501641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.501668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.501772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.501799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.501934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.501960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.502065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.502092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.502204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.502230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 
00:24:54.896 [2024-07-25 10:31:44.502356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.502394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.502525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.502554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.502698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.502724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.502833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.502859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.502966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.502991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.503123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.503149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.896 [2024-07-25 10:31:44.503255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.896 [2024-07-25 10:31:44.503282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.896 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.503391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.503417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.503561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.503590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.503700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.503726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 
00:24:54.897 [2024-07-25 10:31:44.503829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.503855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.503960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.503986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.504102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.504134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.504244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.504277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.504420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.504448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.504562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.504590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.504691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.504717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.504852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.504878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.504981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.505008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.505116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.505143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 
00:24:54.897 [2024-07-25 10:31:44.505250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.505276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.505383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.505410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.505517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.505545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.505683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.505710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.505849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.505875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.505979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.506005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.506120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.506150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.506267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.506294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.506430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.506456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.506601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.506630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 
00:24:54.897 [2024-07-25 10:31:44.506739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.506767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.506874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.506901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.507004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.507031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.507137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.507164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.507268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.507294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.507405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.507432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.507549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.507576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.507707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.507733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.507843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.507872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.507987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.508016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 
00:24:54.897 [2024-07-25 10:31:44.508140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.508174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.508298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.508325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.508433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.508459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.508574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.508601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.508712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.508740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.508843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.508869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.508979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.509006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.509144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.509172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.509281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.509311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.509421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.509448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 
00:24:54.897 [2024-07-25 10:31:44.509563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.509591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.509704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.509731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.509837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.509863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.509973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.510004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.510120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.510147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.510254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.510283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.510387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.510414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.510522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.510550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.510691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.510720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.510853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.510879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 
00:24:54.897 [2024-07-25 10:31:44.510991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.511019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.511149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.511175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.511309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.511335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.511441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.511470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.511587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.511616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.511725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.511751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.511850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.511876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.511981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.512007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.512118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.512144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 00:24:54.897 [2024-07-25 10:31:44.512282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.512311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.897 qpair failed and we were unable to recover it. 
00:24:54.897 [2024-07-25 10:31:44.512421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.897 [2024-07-25 10:31:44.512450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 00:24:54.898 [2024-07-25 10:31:44.512570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.512598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 00:24:54.898 [2024-07-25 10:31:44.512744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.512770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 00:24:54.898 [2024-07-25 10:31:44.512873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.512900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 00:24:54.898 [2024-07-25 10:31:44.513011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.513038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 00:24:54.898 [2024-07-25 10:31:44.513145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.513172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 00:24:54.898 [2024-07-25 10:31:44.513311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.513337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 00:24:54.898 [2024-07-25 10:31:44.513444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.513471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 00:24:54.898 [2024-07-25 10:31:44.513620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.513648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 00:24:54.898 [2024-07-25 10:31:44.513761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.513791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it. 
00:24:54.898 [2024-07-25 10:31:44.513950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.898 [2024-07-25 10:31:44.513984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.898 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for roughly 200 further connection attempts between 10:31:44.514 and 10:31:44.544, differing only in timestamp and in the tqpair handle, which cycles among 0x1bed120, 0x7fb558000b90, 0x7fb560000b90, and 0x7fb568000b90; every attempt targets addr=10.0.0.2, port=4420 and every one ends with "qpair failed and we were unable to recover it." ...]
00:24:54.901 [2024-07-25 10:31:44.544557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.544587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.544694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.544720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.544829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.544855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.544965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.544992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.545129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.545159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.545272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.545300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.545410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.545437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.545549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.545577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.545691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.545719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.545847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.545875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 
00:24:54.901 [2024-07-25 10:31:44.545947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:54.901 [2024-07-25 10:31:44.545988] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:54.901 [2024-07-25 10:31:44.546001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.901 [2024-07-25 10:31:44.546013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:54.901 [2024-07-25 10:31:44.546027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:54.901 [2024-07-25 10:31:44.546028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.901 qpair failed and we were unable to recover it.
00:24:54.901 [2024-07-25 10:31:44.546040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:54.901 [2024-07-25 10:31:44.546098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:24:54.901 [2024-07-25 10:31:44.546140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.901 [2024-07-25 10:31:44.546162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:24:54.901 [2024-07-25 10:31:44.546166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.901 qpair failed and we were unable to recover it.
00:24:54.901 [2024-07-25 10:31:44.546202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:24:54.901 [2024-07-25 10:31:44.546205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:24:54.901 [2024-07-25 10:31:44.546277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.901 [2024-07-25 10:31:44.546303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.901 qpair failed and we were unable to recover it.
00:24:54.901 [2024-07-25 10:31:44.546416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.901 [2024-07-25 10:31:44.546441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.901 qpair failed and we were unable to recover it.
00:24:54.901 [2024-07-25 10:31:44.546571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.901 [2024-07-25 10:31:44.546599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.901 qpair failed and we were unable to recover it.
00:24:54.901 [2024-07-25 10:31:44.546717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.901 [2024-07-25 10:31:44.546746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.901 qpair failed and we were unable to recover it.
00:24:54.901 [2024-07-25 10:31:44.546870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.901 [2024-07-25 10:31:44.546898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.901 qpair failed and we were unable to recover it.
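For reference, errno 111 on Linux is ECONNREFUSED: the TCP handshake reached the host, but nothing was listening on the port. The NOTICE lines above show the nvmf target application still bringing up its trace setup and reactors (cores 4-7) while the initiator was already dialing 10.0.0.2:4420, which is consistent with connections being refused during that startup window. A minimal standalone sketch (not SPDK code; it dials 127.0.0.1:4420 on the assumption that nothing listens there) reproduces the same errno:

/* repro_econnrefused.c - minimal sketch, assuming no listener on the port. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    /* Port taken from the log above; loopback used so the connect is
     * answered immediately with RST instead of timing out. */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With nothing listening, this prints: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}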
00:24:54.901 [2024-07-25 10:31:44.547005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.547031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.547147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.547172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.547280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.547306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.547421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.547448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.547573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.547600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.547707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.547732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.547838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.547864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.547969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.547995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.548105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.548132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.548241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.548267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 
00:24:54.901 [2024-07-25 10:31:44.548380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.548409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.548522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.548550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.548654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.548680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.548787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.548815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.548925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.548951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.549065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.549091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.549205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.549235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.549349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.549376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.549493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.549520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 00:24:54.901 [2024-07-25 10:31:44.549623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.901 [2024-07-25 10:31:44.549648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.901 qpair failed and we were unable to recover it. 
00:24:54.901 [2024-07-25 10:31:44.549761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.549786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.549888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.549914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.550028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.550056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.550174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.550211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.550352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.550379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.550499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.550526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.550633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.550660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.550764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.550791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.550903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.550936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.551059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.551087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 
00:24:54.902 [2024-07-25 10:31:44.551199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.551226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.551339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.551368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.551489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.551521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.551635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.551661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.551770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.551798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.551937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.551964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.552080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.552106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.552229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.552258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.552366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.552393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.552513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.552541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 
00:24:54.902 [2024-07-25 10:31:44.552653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.552681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.552798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.552826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.552935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.552962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.553078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.553105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.553216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.553244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.553359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.553386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.553505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.553535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.553641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.553667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.553783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.553808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.553909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.553935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 
00:24:54.902 [2024-07-25 10:31:44.554050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.554075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.554218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.554246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.554364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.554391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.554521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.554549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.554663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.554690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.554806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.554836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.554944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.554972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.555079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.555107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.555218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.555244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.555352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.555379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 
00:24:54.902 [2024-07-25 10:31:44.555493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.555520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.555627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.555654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.555765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.555793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.555913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.555945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.556056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.556083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.556193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.556221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.556331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.556359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.556499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.556527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.556633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.556660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.556770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.556797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 
00:24:54.902 [2024-07-25 10:31:44.556908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.556935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.557043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.557070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.557183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.557210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.557332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.557359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.557471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.557505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.902 qpair failed and we were unable to recover it. 00:24:54.902 [2024-07-25 10:31:44.557615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.902 [2024-07-25 10:31:44.557642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.557755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.557782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.557898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.557925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.558031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.558058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.558169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.558198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 
00:24:54.903 [2024-07-25 10:31:44.558310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.558336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.558487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.558515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.558618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.558645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.558757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.558784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.558891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.558918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.559031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.559062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.559168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.559194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.559304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.559331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.559434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.559460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.559588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.559616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 
00:24:54.903 [2024-07-25 10:31:44.559749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.559786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.559909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.559938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.560073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.560100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.560219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.560245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.560362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.560389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.560505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.560532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.560648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.560677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.560787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.560814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.560940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.560966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.561075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.561101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 
00:24:54.903 [2024-07-25 10:31:44.561219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.561249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.561363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.561391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.561506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.561534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.561677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.561704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.561823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.561851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.561963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.561992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.562111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.562139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.562247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.562273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.903 [2024-07-25 10:31:44.562385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.903 [2024-07-25 10:31:44.562410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.903 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.562527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.562554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 
00:24:54.904 [2024-07-25 10:31:44.562669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.562694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.562802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.562829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.562939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.562964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.563072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.563098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.563205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.563231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.563336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.563361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.563465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.563500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.563633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.563661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.563777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.563804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.563943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.563970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 
00:24:54.904 [2024-07-25 10:31:44.564085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.564115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.564232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.564262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.564375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.564402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.564523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.564551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.564668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.564696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.564809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.564837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.564948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.564975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.565095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.565124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.565229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.565256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.565360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.565386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 
00:24:54.904 [2024-07-25 10:31:44.565499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.565532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.565652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.565679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.565795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.565822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.565931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.565958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.566062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.566088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.566206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.566235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.566373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.566399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.566512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.566538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.566645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.566672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.566788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.566822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 
00:24:54.904 [2024-07-25 10:31:44.566967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.566995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.904 qpair failed and we were unable to recover it. 00:24:54.904 [2024-07-25 10:31:44.567107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.904 [2024-07-25 10:31:44.567136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.567253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.567279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.567390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.567416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.567534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.567561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.567667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.567693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.567803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.567829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.567933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.567959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.568066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.568096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.568203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.568229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 
00:24:54.905 [2024-07-25 10:31:44.568343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.568369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.568486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.568513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.568618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.568643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.568794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.568824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.568935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.568962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.569065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.569092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.569196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.569222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.569329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.569361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.569476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.569508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.569624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.569651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 
00:24:54.905 [2024-07-25 10:31:44.569757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.569783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.569890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.569917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.570026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.570052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.570165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.570192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.570305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.570333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.570449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.570476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.570606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.570633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.570750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.570777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.570888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.905 [2024-07-25 10:31:44.570914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.905 qpair failed and we were unable to recover it. 00:24:54.905 [2024-07-25 10:31:44.571024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.571054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 
00:24:54.906 [2024-07-25 10:31:44.571201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.571227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.571348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.571377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.571494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.571525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.571644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.571671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.571779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.571805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.571920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.571946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.572085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.572112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.572226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.572254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.572363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.572391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.572513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.572540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 
00:24:54.906 [2024-07-25 10:31:44.572656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.572683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.572798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.572827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.572936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.572963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.573074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.573102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.573215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.573242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.573353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.573382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.573521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.573551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.573670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.573696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.573799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.573825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.573938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.573964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 
00:24:54.906 [2024-07-25 10:31:44.574072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.574100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.574217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.574243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.574353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.574380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.574491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.574519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.574663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.574689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.574795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.574822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.574934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.574963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.575070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.575107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.575213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.575239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.575344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.575369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 
00:24:54.906 [2024-07-25 10:31:44.575475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.575513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.575626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.575652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.575760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.575785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.575904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.575931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.576039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.906 [2024-07-25 10:31:44.576064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.906 qpair failed and we were unable to recover it. 00:24:54.906 [2024-07-25 10:31:44.576173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.576199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.576307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.576334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.576440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.576466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.576581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.576606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.576710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.576737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 
00:24:54.907 [2024-07-25 10:31:44.576851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.576877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.576990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.577016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.577118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.577144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.577257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.577283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.577386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.577411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.577522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.577550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.577656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.577681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.577801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.577826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.577935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.577962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.578089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.578121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 
00:24:54.907 [2024-07-25 10:31:44.578273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.578303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.578420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.578447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.578576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.578604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.578719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.578746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.578857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.578888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.579010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.579036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.579143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.579171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.579282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.579309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.579419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.579445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.579571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.579599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 
00:24:54.907 [2024-07-25 10:31:44.579709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.579737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.579857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.579884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.580032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.580059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.907 qpair failed and we were unable to recover it. 00:24:54.907 [2024-07-25 10:31:44.580196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.907 [2024-07-25 10:31:44.580223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.580338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.580368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.580503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.580531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.580645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.580674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.580788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.580813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.580923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.580950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.581089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.581115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 
00:24:54.908 [2024-07-25 10:31:44.581225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.581251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.581359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.581385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.581498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.581524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.581630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.581656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.581765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.581791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.581898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.581924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.582032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.582058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.582170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.582196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.582294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.582320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.582429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.582456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 
00:24:54.908 [2024-07-25 10:31:44.582580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.582610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.582786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.582824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.582969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.583000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.583143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.583170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.583280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.583307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.583424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.583451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.583565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.583592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.583706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.583732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.583838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.583864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.583972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.583998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 
00:24:54.908 [2024-07-25 10:31:44.584113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.584139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.584247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.584273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.584378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.584404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.584524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.584553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.584661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.584687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.584809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.584838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.584950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.584977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.585089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.585115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.585222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.585248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 00:24:54.908 [2024-07-25 10:31:44.585366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.585393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.908 qpair failed and we were unable to recover it. 
00:24:54.908 [2024-07-25 10:31:44.585511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.908 [2024-07-25 10:31:44.585539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.585653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.585679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.585791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.585819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.585930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.585957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.586101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.586128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.586273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.586300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.586404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.586430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.586548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.586576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.586699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.586726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.586870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.586898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 
00:24:54.909 [2024-07-25 10:31:44.587012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.587039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.587150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.587180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.587295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.587321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.587430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.587456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.587572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.587600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.587711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.587739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.587850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.587876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.587994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.588023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.588134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.588160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.588277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.588303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 
00:24:54.909 [2024-07-25 10:31:44.588404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.588431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.588552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.588585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.588698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.588725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.588829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.588855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.588968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.588997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.589108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.589136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.589240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.589266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.589368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.589395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.589514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.589542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 00:24:54.909 [2024-07-25 10:31:44.589656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.909 [2024-07-25 10:31:44.589682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.909 qpair failed and we were unable to recover it. 
00:24:54.909 [2024-07-25 10:31:44.589788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.909 [2024-07-25 10:31:44.589822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.909 qpair failed and we were unable to recover it.
00:24:54.909 [2024-07-25 10:31:44.589935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.909 [2024-07-25 10:31:44.589962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.909 qpair failed and we were unable to recover it.
00:24:54.909 [2024-07-25 10:31:44.590071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.909 [2024-07-25 10:31:44.590098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.909 qpair failed and we were unable to recover it.
00:24:54.909 [2024-07-25 10:31:44.590203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.909 [2024-07-25 10:31:44.590229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.909 qpair failed and we were unable to recover it.
00:24:54.909 [2024-07-25 10:31:44.590336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.909 [2024-07-25 10:31:44.590362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.909 qpair failed and we were unable to recover it.
00:24:54.909 [2024-07-25 10:31:44.590491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.909 [2024-07-25 10:31:44.590518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.909 qpair failed and we were unable to recover it.
00:24:54.909 [2024-07-25 10:31:44.590628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.909 [2024-07-25 10:31:44.590655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.909 qpair failed and we were unable to recover it.
00:24:54.909 [2024-07-25 10:31:44.590776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.909 [2024-07-25 10:31:44.590804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.909 qpair failed and we were unable to recover it.
00:24:54.909 [2024-07-25 10:31:44.590915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.590942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.591053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.591079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.591189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.591216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.591329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.591358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.591484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.591512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.591626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.591654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.591802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.591829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.591961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.591988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.592117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.592148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.592256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.592282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.592404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.592442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.592590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.592618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.592733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.592759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.592870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.592895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.593006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.593033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.593147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.593175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.593303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.593331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.593442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.593468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.593605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.593632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.593743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.593769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.593879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.593904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.594010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.594036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.594154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.594179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.594289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.594315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.594422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.594448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.594584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.594610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.594716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.594742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.594850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.594876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.594981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.595008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.595120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.595146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.595257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.595284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.595400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.595426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.595548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.595580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.595697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.595727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.595846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.595872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.595982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.910 [2024-07-25 10:31:44.596008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.910 qpair failed and we were unable to recover it.
00:24:54.910 [2024-07-25 10:31:44.596120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.596147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.596258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.596287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.596401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.596431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.596550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.596578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.596698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.596724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.596838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.596865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.596972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.596997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.597107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.597133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.597246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.597275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.597397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.597427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.597536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.597563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.597684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.597711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.597820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.597847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.597975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.598001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.598109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.598140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.598294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.598320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.598432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.598458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.598581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.598611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.598725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.598752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.598867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.598893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.599001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.599027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.599141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.599167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.599276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.599304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.599421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.599447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.599571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.599600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.599740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.599767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.599873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.599903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.600016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.600043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.600159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.600185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.600290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.600316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.600425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.600452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.600562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.600589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.600699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.600726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.600846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.600873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.600985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.601011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.601120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.601146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.601252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.601278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.911 [2024-07-25 10:31:44.601386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.911 [2024-07-25 10:31:44.601412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.911 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.601532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.601586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.601698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.601725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.601838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.601865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.601988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.602015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.602128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.602155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.602268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.602294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.602413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.602439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.602558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.602586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.602694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.602721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.602830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.602858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.602971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.602999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.603102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.603129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.603244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.603270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.603372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.603398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.603510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.603538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.603647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.603675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.603793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.603824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.603970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.603996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.604103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.604128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.604239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.604265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.604380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.604406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.604520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.604549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.604664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.604691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.604808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.604836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.604957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.604985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.605099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.605132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.605245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.605272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.605387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.605416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.605532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.605560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.605671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.605699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.912 [2024-07-25 10:31:44.605817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.912 [2024-07-25 10:31:44.605844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.912 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.605959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.605986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.606096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.606122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.606238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.606264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.606375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.606403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.606517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.606544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.606665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.606693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.606803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.606831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.606944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.606971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.607085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.607115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.607231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.607260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.607389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.607416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.607546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.607574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.607694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.607721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.607825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.607851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.607967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.607993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.608107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.608135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.608253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.608279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.608400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.608429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.608544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.608572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.608720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.608747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.608853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.608879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.608993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.609022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.609132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.609159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.609269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.609295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.609401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.609427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.609570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.609602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.609722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.609748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.609859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.609886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.610004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.610031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.610142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.610172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.610284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.610312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.610460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.610494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.610607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.610634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.610752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.610779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.610893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.610920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.611031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.611058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.913 [2024-07-25 10:31:44.611163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.913 [2024-07-25 10:31:44.611189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.913 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.611300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.611328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.611437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.611464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.611594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.611621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.611725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.611752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.611866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.611892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.612005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.612032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.612138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.612165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.612274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.612303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.612418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.612444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.612574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.612602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.612711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.612737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.612858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.612886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.612994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.613020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.613142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.613168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.613293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.613325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.613442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.613469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.613587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.613614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.613728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.613754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.613871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.613898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.614016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.914 [2024-07-25 10:31:44.614045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420
00:24:54.914 qpair failed and we were unable to recover it.
00:24:54.914 [2024-07-25 10:31:44.614154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.614182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.614287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.614313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.614425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.614452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.614572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.614599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.614707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.614733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.614852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.614879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.614985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.615013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.615167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.615193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.615308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.615343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.615461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.615495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 
00:24:54.914 [2024-07-25 10:31:44.615644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.615671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.615779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.615806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.615913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.615940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.616053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.616081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.616193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.616219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.616323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.616350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.616456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.616489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.914 qpair failed and we were unable to recover it. 00:24:54.914 [2024-07-25 10:31:44.616606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.914 [2024-07-25 10:31:44.616632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.616743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.616770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.616880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.616907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 
00:24:54.915 [2024-07-25 10:31:44.617013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.617041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.617183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.617213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.617331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.617358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.617473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.617508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.617622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.617648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.617772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.617798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.617902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.617929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.618034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.618060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.618176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.618202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.618309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.618336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 
00:24:54.915 [2024-07-25 10:31:44.618440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.618466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.618587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.618614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.618721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.618748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.618860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.618887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.619001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.619028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.619186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.619216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.619331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.619363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.619475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.619508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.619621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.619649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.619762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.619789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 
00:24:54.915 [2024-07-25 10:31:44.619896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.619922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.620035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.620063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.620181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.620210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.620321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.620349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.620461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.620496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.620622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.620648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.620756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.620782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.620887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.620913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.621023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.621054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.621165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.621191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 
00:24:54.915 [2024-07-25 10:31:44.621306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.621333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.621453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.621486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.621598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.621624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.621735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.621762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.915 [2024-07-25 10:31:44.621874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.915 [2024-07-25 10:31:44.621904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.915 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.622017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.622043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.622148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.622174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.622324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.622351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.622465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.622495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.622612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.622641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 
00:24:54.916 [2024-07-25 10:31:44.622762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.622789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.622901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.622927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.623043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.623070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.623183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.623212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.623330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.623358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.623470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.623503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.623608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.623635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.623750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.623777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.623887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.623914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.624024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.624054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 
00:24:54.916 [2024-07-25 10:31:44.624172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.624199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.624311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.624339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.624451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.624484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.624609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.624636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.624750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.624777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.624893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.624921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.625038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.625065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.625177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.625203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.625310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.625336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.625448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.625474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 
00:24:54.916 [2024-07-25 10:31:44.625599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.625626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.625730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.625757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.625873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.625901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.626012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.626040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.626145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.626172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.626271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.626297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.626412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.626439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.626555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.626584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.626697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.626728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.626838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.626864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 
00:24:54.916 [2024-07-25 10:31:44.626980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.916 [2024-07-25 10:31:44.627006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.916 qpair failed and we were unable to recover it. 00:24:54.916 [2024-07-25 10:31:44.627114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.627140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.627253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.627278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.627391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.627417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.627529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.627556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.627668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.627696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.627803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.627830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.627944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.627970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.628088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.628114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.628226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.628255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 
00:24:54.917 [2024-07-25 10:31:44.628360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.628389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.628500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.628527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.628643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.628670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.628785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.628811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.628926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.628953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.629055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.629083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.629191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.629217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.629328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.629354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.629458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.629490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.629594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.629620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 
00:24:54.917 [2024-07-25 10:31:44.629735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.629762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.629881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.629907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.630013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.630040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.630160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.630190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.630312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.630341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.630470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.630517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.630645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.630674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.630788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.630814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.630920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.630946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.631062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.631088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 
00:24:54.917 [2024-07-25 10:31:44.631194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.631220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.631329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.631355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.631464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.917 [2024-07-25 10:31:44.631499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.917 qpair failed and we were unable to recover it. 00:24:54.917 [2024-07-25 10:31:44.631614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.631640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.631740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.631766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.631875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.631901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.632007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.632033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.632132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.632158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.632271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.632302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.632412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.632441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 
00:24:54.918 [2024-07-25 10:31:44.632562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.632592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.632701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.632728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.632843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.632871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.632977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.633004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.633119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.633146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.633256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.633284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.633397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.633425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.633542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.633569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.633703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.633729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.633833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.633859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 
00:24:54.918 [2024-07-25 10:31:44.633968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.633996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.634099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.634124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.634241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.634269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.634379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.634406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.634524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.634562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.634707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.634743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.634903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.634941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.635065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.635095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.635207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.635234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 00:24:54.918 [2024-07-25 10:31:44.635343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.918 [2024-07-25 10:31:44.635369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:54.918 qpair failed and we were unable to recover it. 
00:24:54.918 [2024-07-25 10:31:44.635477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.918 [2024-07-25 10:31:44.635509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420
00:24:54.918 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats back-to-back from 10:31:44.635 through 10:31:44.655 (Jenkins time 00:24:54.918-00:24:55.185), cycling through tqpair handles 0x7fb558000b90, 0x7fb560000b90, 0x7fb568000b90, and 0x1bed120, always against addr=10.0.0.2, port=4420 ...]
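On Linux, errno 111 is ECONNREFUSED: the target host answered, but nothing was accepting connections on the port (4420 is the IANA-assigned NVMe/TCP port, and 10.0.0.2 is the test target address used throughout this run). The sketch below is illustrative only, not SPDK's posix_sock_create(); it shows the plain socket sequence that produces exactly this errno when the host is up but the subsystem's listener is down:

/* Minimal sketch (not SPDK code): a TCP connect() to a reachable host
 * with no listener on the port fails with errno 111 (ECONNREFUSED),
 * which is the failure posix_sock_create() keeps reporting above.
 * If the host itself were down, you would instead see a timeout or
 * EHOSTUNREACH rather than 111. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111 */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Compiled and run while no listener is present on the target, this prints the same "connect() failed, errno = 111" that floods the log above, once per attempt.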
00:24:55.185 [2024-07-25 10:31:44.655501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.185 [2024-07-25 10:31:44.655531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:55.185 qpair failed and we were unable to recover it.
00:24:55.185 A controller has encountered a failure and is being reset.
[... further connect() retries and qpair failures continue in the same pattern through 10:31:44.665, again across tqpair handles 0x7fb558000b90, 0x7fb560000b90, 0x7fb568000b90, and 0x1bed120 ...]
00:24:55.187 [2024-07-25 10:31:44.665288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.187 [2024-07-25 10:31:44.665315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420
00:24:55.187 qpair failed and we were unable to recover it.
00:24:55.187 [2024-07-25 10:31:44.665419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.665446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.665570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.665598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.665705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.665732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.665836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.665863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.665970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.665996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.666101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.666129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.666242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.666268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.666377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.666404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.666519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.666548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.666670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.666698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 
00:24:55.187 [2024-07-25 10:31:44.666803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.666831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.666941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.666967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.667070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.667098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.667206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.667234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.667346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.667372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.667477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.667513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.667623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.667651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.667769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.667797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.667908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.667936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.668044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.668072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 
00:24:55.187 [2024-07-25 10:31:44.668187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.668221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.668326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.668353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.668468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.668506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.668629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.668656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.668759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.668787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.668894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.668921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.669029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.669056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.669161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.669189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.669293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.669321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb558000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.669433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.669464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 
00:24:55.187 [2024-07-25 10:31:44.669584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.669612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.187 [2024-07-25 10:31:44.669729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.187 [2024-07-25 10:31:44.669758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.187 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.669872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.669899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.670004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.670030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.670142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.670170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.670292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.670319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.670436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.670462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.670578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.670604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.670710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.670736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.670848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.670875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 
00:24:55.188 [2024-07-25 10:31:44.670994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.671022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.671133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.671161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.671276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.671302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.671414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.671442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.671566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.671595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.671702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.671728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.671845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.671872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.671984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.672012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb568000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.672122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.672152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 00:24:55.188 [2024-07-25 10:31:44.672270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.672299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it. 
00:24:55.188 [2024-07-25 10:31:44.672409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.188 [2024-07-25 10:31:44.672438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:55.188 qpair failed and we were unable to recover it.
[... the same failure pair repeats through 10:31:44.675958 for tqpair=0x1bed120 and 0x7fb560000b90, interleaved with the shell trace below as start_nvmf_tgt returns ...]
00:24:55.188 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.188 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:55.188 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.188 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:55.188 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:55.189 [2024-07-25 10:31:44.676071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.189 [2024-07-25 10:31:44.676099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed120 with addr=10.0.0.2, port=4420 00:24:55.189 qpair failed and we were unable to recover it.
[... the failure pair repeats through 10:31:44.677339 for tqpair=0x1bed120 with addr=10.0.0.2, port=4420 ...]
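errno 111 on Linux is ECONNREFUSED: while the target side of the test is down, every connect() to 10.0.0.2:4420 is answered with a TCP reset, so the host driver keeps burning through candidate qpairs (the tqpair pointers above) until the listener returns. The same probe can be reproduced from bash, which services /dev/tcp redirections with a plain connect(); this is a standalone sketch, not part of the test scripts:

    # Retry until something is listening on the NVMe/TCP port again.
    # Each failed attempt is the same ECONNREFUSED the driver logs above.
    while ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        echo 'connect() failed, errno = 111 (ECONNREFUSED); retrying'
        sleep 1
    done
    echo 'listener is back on 10.0.0.2:4420'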
00:24:55.189 [2024-07-25 10:31:44.677447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.189 [2024-07-25 10:31:44.677476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb560000b90 with addr=10.0.0.2, port=4420 00:24:55.189 qpair failed and we were unable to recover it. 00:24:55.189 [2024-07-25 10:31:44.677649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.189 [2024-07-25 10:31:44.677690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfb190 with addr=10.0.0.2, port=4420 00:24:55.189 [2024-07-25 10:31:44.677713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfb190 is same with the state(5) to be set 00:24:55.189 [2024-07-25 10:31:44.677742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb190 (9): Bad file descriptor 00:24:55.189 [2024-07-25 10:31:44.677763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.189 [2024-07-25 10:31:44.677787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.189 [2024-07-25 10:31:44.677806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.189 Unable to reset the controller. 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.189 Malloc0 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.189 [2024-07-25 10:31:44.725170] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.189 [2024-07-25 10:31:44.753454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.189 10:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1585991 00:24:56.123 Controller properly reset. 00:25:01.389 Initializing NVMe Controllers 00:25:01.389 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:01.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:01.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:01.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:01.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:01.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:01.390 Initialization complete. Launching workers. 
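The rpc_cmd calls above are the harness's wrapper around SPDK's scripts/rpc.py. Issued by hand against the default /var/tmp/spdk.sock RPC socket, the same target bring-up would look roughly like this (a sketch; every flag is copied verbatim from the trace above, nothing added):

    # Back a subsystem with a 64 MiB malloc bdev and expose it over NVMe/TCP.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener on 10.0.0.2:4420 is back, the stuck initiator reconnects, which is exactly the "Controller properly reset." line above.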
00:25:01.390 Starting thread on core 1 00:25:01.390 Starting thread on core 2 00:25:01.390 Starting thread on core 3 00:25:01.390 Starting thread on core 0 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:01.390 00:25:01.390 real 0m10.721s 00:25:01.390 user 0m32.463s 00:25:01.390 sys 0m8.129s 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.390 ************************************ 00:25:01.390 END TEST nvmf_target_disconnect_tc2 00:25:01.390 ************************************ 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.390 rmmod nvme_tcp 00:25:01.390 rmmod nvme_fabrics 00:25:01.390 rmmod nvme_keyring 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1586398 ']' 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1586398 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1586398 ']' 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1586398 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1586398 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1586398' 00:25:01.390 killing process with pid 1586398 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1586398 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1586398 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.390 10:31:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.299 10:31:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.299 00:25:03.299 real 0m15.040s 00:25:03.299 user 0m57.019s 00:25:03.299 sys 0m10.463s 00:25:03.299 10:31:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.299 10:31:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:03.299 ************************************ 00:25:03.299 END TEST nvmf_target_disconnect 00:25:03.299 ************************************ 00:25:03.299 10:31:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:03.299 00:25:03.299 real 5m0.989s 00:25:03.299 user 11m4.286s 00:25:03.299 sys 1m10.087s 00:25:03.299 10:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.299 10:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.299 ************************************ 00:25:03.299 END TEST nvmf_host 00:25:03.299 ************************************ 00:25:03.299 00:25:03.299 real 19m37.143s 00:25:03.299 user 47m10.852s 00:25:03.299 sys 4m40.485s 00:25:03.299 10:31:52 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.299 10:31:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.299 ************************************ 00:25:03.299 END TEST nvmf_tcp 00:25:03.299 ************************************ 00:25:03.299 10:31:52 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:25:03.299 10:31:53 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:03.299 10:31:53 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:03.299 10:31:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:03.299 10:31:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.299 ************************************ 00:25:03.299 START TEST spdkcli_nvmf_tcp 00:25:03.299 ************************************ 00:25:03.299 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:03.559 * Looking for test storage... 
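The nvmftestfini teardown above runs in three moves: unload the kernel initiator modules (the rmmod lines), kill the target app (killprocess 1586398, which the harness identifies as reactor_4), and flush the test addresses from the data-plane interface. Condensed into a standalone sketch, with the retry count and interface name taken from the trace and the killprocess internals simplified:

    # Initiator modules can stay busy while qpairs drain, hence the retry loop.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    set -e
    # Stop the nvmf target app if it is still alive, then reap it.
    if kill -0 "$nvmf_tgt_pid" 2>/dev/null; then
        kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"
    fi
    # Drop the test IPs left on the NIC used for 10.0.0.x traffic.
    ip -4 addr flush cvl_0_1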
00:25:03.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1587333 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1587333 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1587333 ']' 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:03.559 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.559 [2024-07-25 10:31:53.157508] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:25:03.559 [2024-07-25 10:31:53.157605] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587333 ] 00:25:03.559 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.559 [2024-07-25 10:31:53.218430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:03.822 [2024-07-25 10:31:53.340315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.822 [2024-07-25 10:31:53.340321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.822 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:03.822 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:03.822 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:03.822 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:03.822 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:03.822 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:03.822 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:03.822 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:03.822 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:03.822 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:03.822 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:03.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:03.822 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:03.822 ' 00:25:06.359 [2024-07-25 10:31:56.056423] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.742 [2024-07-25 10:31:57.300651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:10.279 [2024-07-25 10:31:59.631848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:12.183 [2024-07-25 10:32:01.609931] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:13.567 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:13.567 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:13.567 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:13.567 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:13.567 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:13.567 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:13.567 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:13.567 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:13.567 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:13.567 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:13.567 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:13.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:13.567 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:13.567 10:32:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:13.567 10:32:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:13.567 10:32:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.567 10:32:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:13.567 10:32:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:13.567 10:32:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.567 10:32:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:13.567 10:32:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:14.135 10:32:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:14.135 10:32:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:14.135 10:32:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:14.135 10:32:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:14.135 10:32:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.135 10:32:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:14.135 10:32:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:14.135 10:32:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.135 10:32:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:14.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:14.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:14.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:14.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:14.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:14.135 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:14.135 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:14.135 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:14.135 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:14.135 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:14.135 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:14.135 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:14.135 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:14.135 ' 00:25:19.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:19.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:19.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:19.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:19.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:19.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:19.411 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:19.412 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:19.412 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:19.412 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:19.412 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:19.412 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:19.412 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:19.412 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1587333 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1587333 ']' 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1587333 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1587333 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1587333' 00:25:19.412 killing process with pid 1587333 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1587333 00:25:19.412 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1587333 00:25:19.671 10:32:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:19.671 10:32:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:19.671 10:32:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1587333 ']' 00:25:19.671 10:32:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1587333 00:25:19.671 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1587333 ']' 00:25:19.671 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1587333 00:25:19.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1587333) - No such process 00:25:19.671 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1587333 is not found' 00:25:19.671 Process with pid 1587333 is not found 00:25:19.671 10:32:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:19.672 10:32:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:19.672 10:32:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:19.672 00:25:19.672 real 0m16.316s 00:25:19.672 user 0m34.772s 00:25:19.672 sys 0m0.840s 00:25:19.672 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:19.672 10:32:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.672 ************************************ 00:25:19.672 END TEST spdkcli_nvmf_tcp 00:25:19.672 ************************************ 00:25:19.672 10:32:09 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:19.672 10:32:09 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:19.672 10:32:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:19.672 10:32:09 -- common/autotest_common.sh@10 -- # set +x 00:25:19.672 ************************************ 00:25:19.672 START TEST nvmf_identify_passthru 00:25:19.672 ************************************ 00:25:19.672 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:19.672 * Looking for test storage... 00:25:19.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:19.931 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.931 10:32:09 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.931 10:32:09 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.931 10:32:09 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.931 10:32:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.931 10:32:09 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.931 10:32:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.931 10:32:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:19.931 10:32:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.931 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:19.931 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.931 10:32:09 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.931 10:32:09 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.931 10:32:09 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.931 10:32:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.931 10:32:09 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.932 10:32:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.932 10:32:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:19.932 10:32:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.932 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:19.932 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:19.932 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.932 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:19.932 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:19.932 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:19.932 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.932 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:19.932 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.932 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:19.932 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:19.932 10:32:09 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:19.932 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
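Editor's note: the xtrace above has just entered gather_supported_nvmf_pci_devs from nvmf/common.sh, which buckets known Intel E810 (0x1592/0x159b), X722 (0x37d2) and Mellanox device IDs and then resolves each matching PCI function to its kernel net device through sysfs; the "Found net devices under 0000:08:00.x: cvl_0_x" lines below are the result. A minimal standalone sketch of the same matching, assuming lspci is available (illustrative only, with an abridged ID list; not part of the harness):

#!/usr/bin/env bash
# Match a few of the vendor:device IDs the harness probes, then map each
# PCI function to its net device via /sys, mirroring the trace above.
ids=("8086:1592" "8086:159b" "8086:37d2" "15b3:1017" "15b3:101d")
for id in "${ids[@]}"; do
  for pci in $(lspci -D -d "$id" | awk '{print $1}'); do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$dev" ] && echo "Found net device under $pci: $(basename "$dev")"
    done
  done
done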
00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.310 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:21.311 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:21.311 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:21.311 10:32:11 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:21.311 Found net devices under 0000:08:00.0: cvl_0_0 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:21.311 Found net devices under 0000:08:00.1: cvl_0_1 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:21.311 10:32:11 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.311 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:21.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:25:21.570 00:25:21.570 --- 10.0.0.2 ping statistics --- 00:25:21.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.570 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:25:21.570 00:25:21.570 --- 10.0.0.1 ping statistics --- 00:25:21.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.570 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:21.570 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:21.570 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:21.570 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:25:21.570 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:84:00.0 00:25:21.570 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:25:21.570 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:25:21.570 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:25:21.570 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:21.570 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:21.570 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.751 
10:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN 00:25:25.751 10:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:25:25.751 10:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:25.751 10:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:25.752 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.937 10:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:29.937 10:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 10:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 10:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1590896 00:25:29.937 10:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:29.937 10:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:29.937 10:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1590896 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1590896 ']' 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:29.937 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 [2024-07-25 10:32:19.693085] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:25:29.937 [2024-07-25 10:32:19.693183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.195 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.195 [2024-07-25 10:32:19.759161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.195 [2024-07-25 10:32:19.875988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.195 [2024-07-25 10:32:19.876051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:30.196 [2024-07-25 10:32:19.876067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.196 [2024-07-25 10:32:19.876089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.196 [2024-07-25 10:32:19.876102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.196 [2024-07-25 10:32:19.876211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.196 [2024-07-25 10:32:19.876290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.196 [2024-07-25 10:32:19.876341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.196 [2024-07-25 10:32:19.876344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.196 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:30.196 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:25:30.196 10:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:30.196 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.196 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:30.196 INFO: Log level set to 20 00:25:30.196 INFO: Requests: 00:25:30.196 { 00:25:30.196 "jsonrpc": "2.0", 00:25:30.196 "method": "nvmf_set_config", 00:25:30.196 "id": 1, 00:25:30.196 "params": { 00:25:30.196 "admin_cmd_passthru": { 00:25:30.196 "identify_ctrlr": true 00:25:30.196 } 00:25:30.196 } 00:25:30.196 } 00:25:30.196 00:25:30.196 INFO: response: 00:25:30.196 { 00:25:30.196 "jsonrpc": "2.0", 00:25:30.196 "id": 1, 00:25:30.196 "result": true 00:25:30.196 } 00:25:30.196 00:25:30.196 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.196 10:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:30.196 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.196 10:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:30.196 INFO: Setting log level to 20 00:25:30.196 INFO: Setting log level to 20 00:25:30.196 INFO: Log level set to 20 00:25:30.196 INFO: Log level set to 20 00:25:30.196 INFO: Requests: 00:25:30.196 { 00:25:30.196 "jsonrpc": "2.0", 00:25:30.196 "method": "framework_start_init", 00:25:30.196 "id": 1 00:25:30.196 } 00:25:30.196 00:25:30.196 INFO: Requests: 00:25:30.196 { 00:25:30.196 "jsonrpc": "2.0", 00:25:30.196 "method": "framework_start_init", 00:25:30.196 "id": 1 00:25:30.196 } 00:25:30.196 00:25:30.455 [2024-07-25 10:32:20.044602] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:30.455 INFO: response: 00:25:30.455 { 00:25:30.455 "jsonrpc": "2.0", 00:25:30.455 "id": 1, 00:25:30.455 "result": true 00:25:30.455 } 00:25:30.455 00:25:30.455 INFO: response: 00:25:30.455 { 00:25:30.455 "jsonrpc": "2.0", 00:25:30.455 "id": 1, 00:25:30.455 "result": true 00:25:30.455 } 00:25:30.455 00:25:30.455 10:32:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.455 10:32:20 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:30.455 10:32:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.455 10:32:20 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.455 INFO: Setting log level to 40 00:25:30.455 INFO: Setting log level to 40 00:25:30.455 INFO: Setting log level to 40 00:25:30.455 [2024-07-25 10:32:20.054563] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.455 10:32:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.455 10:32:20 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:30.455 10:32:20 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:30.455 10:32:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:30.455 10:32:20 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:25:30.455 10:32:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.455 10:32:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:33.737 Nvme0n1 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.737 10:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.737 10:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.737 10:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:33.737 [2024-07-25 10:32:22.927038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.737 10:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:33.737 [ 00:25:33.737 { 00:25:33.737 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:33.737 "subtype": "Discovery", 00:25:33.737 "listen_addresses": [], 00:25:33.737 "allow_any_host": true, 00:25:33.737 "hosts": [] 00:25:33.737 }, 00:25:33.737 { 00:25:33.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.737 "subtype": "NVMe", 00:25:33.737 "listen_addresses": [ 00:25:33.737 { 00:25:33.737 "trtype": "TCP", 00:25:33.737 "adrfam": "IPv4", 00:25:33.737 "traddr": "10.0.0.2", 00:25:33.737 "trsvcid": "4420" 00:25:33.737 } 00:25:33.737 ], 00:25:33.737 "allow_any_host": true, 00:25:33.737 "hosts": [], 00:25:33.737 "serial_number": 
"SPDK00000000000001", 00:25:33.737 "model_number": "SPDK bdev Controller", 00:25:33.737 "max_namespaces": 1, 00:25:33.737 "min_cntlid": 1, 00:25:33.737 "max_cntlid": 65519, 00:25:33.737 "namespaces": [ 00:25:33.737 { 00:25:33.737 "nsid": 1, 00:25:33.737 "bdev_name": "Nvme0n1", 00:25:33.737 "name": "Nvme0n1", 00:25:33.737 "nguid": "CE858BBA6A3F42CEB995DFEB1C87E157", 00:25:33.737 "uuid": "ce858bba-6a3f-42ce-b995-dfeb1c87e157" 00:25:33.737 } 00:25:33.737 ] 00:25:33.737 } 00:25:33.737 ] 00:25:33.737 10:32:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.737 10:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:33.737 10:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:33.737 10:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:33.737 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:33.737 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']' 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:33.737 10:32:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:33.737 rmmod nvme_tcp 00:25:33.737 rmmod nvme_fabrics 00:25:33.737 rmmod nvme_keyring 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:33.737 10:32:23 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1590896 ']' 00:25:33.737 10:32:23 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1590896 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1590896 ']' 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1590896 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1590896 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1590896' 00:25:33.737 killing process with pid 1590896 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1590896 00:25:33.737 10:32:23 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1590896 00:25:35.637 10:32:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:35.637 10:32:25 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:35.637 10:32:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:35.637 10:32:25 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.637 10:32:25 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.637 10:32:25 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.637 10:32:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:35.637 10:32:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.543 10:32:27 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:37.543 00:25:37.543 real 0m17.665s 00:25:37.543 user 0m26.814s 00:25:37.543 sys 0m2.034s 00:25:37.543 10:32:27 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:37.543 10:32:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:37.543 ************************************ 00:25:37.543 END TEST nvmf_identify_passthru 00:25:37.543 ************************************ 00:25:37.544 10:32:27 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:37.544 10:32:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:37.544 10:32:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:37.544 10:32:27 -- common/autotest_common.sh@10 -- # set +x 00:25:37.544 ************************************ 00:25:37.544 START TEST nvmf_dif 00:25:37.544 ************************************ 00:25:37.544 10:32:27 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:37.544 * Looking for test storage... 
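Editor's note: the block above is the standard nvmftestfini teardown for the nvmf_identify_passthru suite: the initiator-side kernel modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the nvmf_tgt process is killed, and remove_spdk_ns plus the address flush dismantle the test network before nvmf_dif re-runs nvmftestinit. Condensed into plain commands, under the assumption that $NVMF_PID holds the target's PID (variable name hypothetical) and that cvl_0_0_ns_spdk is the namespace created during setup; remove_spdk_ns itself runs with xtrace disabled, so the netns deletion is inferred rather than shown in the trace:

modprobe -r nvme-tcp nvme-fabrics    # unload initiator kernel modules
kill -9 "$NVMF_PID" 2>/dev/null      # stop the nvmf_tgt reactors ($NVMF_PID is hypothetical)
ip netns delete cvl_0_0_ns_spdk      # drop the target-side namespace (inferred step)
ip -4 addr flush cvl_0_1             # clear the initiator-side test address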
00:25:37.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:37.544 10:32:27 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.544 10:32:27 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.544 10:32:27 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.544 10:32:27 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.544 10:32:27 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.544 10:32:27 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.544 10:32:27 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.544 10:32:27 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:37.544 10:32:27 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.544 10:32:27 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:37.544 10:32:27 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:37.544 10:32:27 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:37.544 10:32:27 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:37.544 10:32:27 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.544 10:32:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:37.544 10:32:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.544 10:32:27 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.544 10:32:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:39.451 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.451 10:32:28 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:39.452 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:39.452 Found net devices under 0000:08:00.0: cvl_0_0 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:39.452 Found net devices under 0000:08:00.1: cvl_0_1 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.452 10:32:28 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:25:39.452 00:25:39.452 --- 10.0.0.2 ping statistics --- 00:25:39.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.452 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:25:39.452 00:25:39.452 --- 10.0.0.1 ping statistics --- 00:25:39.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.452 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:39.452 10:32:28 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:40.390 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:25:40.390 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:40.390 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:25:40.390 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:25:40.390 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:25:40.390 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:25:40.390 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:25:40.390 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:25:40.390 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:25:40.390 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:25:40.390 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:25:40.390 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:25:40.390 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:25:40.390 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:25:40.390 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:25:40.390 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:25:40.390 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:25:40.390 10:32:29 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.390 10:32:29 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:40.390 10:32:29 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:40.390 10:32:29 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.390 10:32:29 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:40.390 10:32:29 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:40.390 10:32:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:40.390 10:32:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:40.390 10:32:29 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.390 10:32:29 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:40.390 10:32:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:40.390 10:32:29 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:40.390 10:32:29 nvmf_dif -- 
nvmf/common.sh@481 -- # nvmfpid=1593334 00:25:40.390 10:32:29 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1593334 00:25:40.390 10:32:29 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1593334 ']' 00:25:40.390 10:32:29 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.390 10:32:29 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.390 10:32:29 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.390 10:32:29 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.390 10:32:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:40.390 [2024-07-25 10:32:30.050572] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:25:40.390 [2024-07-25 10:32:30.050672] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.390 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.390 [2024-07-25 10:32:30.117209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.649 [2024-07-25 10:32:30.233373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.649 [2024-07-25 10:32:30.233439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.649 [2024-07-25 10:32:30.233455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.649 [2024-07-25 10:32:30.233468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.649 [2024-07-25 10:32:30.233487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:40.649 [2024-07-25 10:32:30.233529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:25:40.649 10:32:30 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 10:32:30 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.649 10:32:30 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:40.649 10:32:30 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 [2024-07-25 10:32:30.363505] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.649 10:32:30 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:40.649 10:32:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 ************************************ 00:25:40.649 START TEST fio_dif_1_default 00:25:40.649 ************************************ 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 bdev_null0 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.649 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 [2024-07-25 10:32:30.423817] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:40.907 { 00:25:40.907 "params": { 00:25:40.907 "name": "Nvme$subsystem", 00:25:40.907 "trtype": "$TEST_TRANSPORT", 00:25:40.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:40.907 "adrfam": "ipv4", 00:25:40.907 "trsvcid": "$NVMF_PORT", 00:25:40.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:40.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:40.907 "hdgst": ${hdgst:-false}, 00:25:40.907 "ddgst": ${ddgst:-false} 00:25:40.907 }, 00:25:40.907 "method": "bdev_nvme_attach_controller" 00:25:40.907 } 00:25:40.907 EOF 00:25:40.907 )") 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:40.907 10:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:40.908 "params": { 00:25:40.908 "name": "Nvme0", 00:25:40.908 "trtype": "tcp", 00:25:40.908 "traddr": "10.0.0.2", 00:25:40.908 "adrfam": "ipv4", 00:25:40.908 "trsvcid": "4420", 00:25:40.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:40.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:40.908 "hdgst": false, 00:25:40.908 "ddgst": false 00:25:40.908 }, 00:25:40.908 "method": "bdev_nvme_attach_controller" 00:25:40.908 }' 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:40.908 10:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:40.908 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:40.908 fio-3.35 00:25:40.908 Starting 1 thread 00:25:41.165 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.362 00:25:53.362 filename0: (groupid=0, jobs=1): err= 0: pid=1593511: Thu Jul 25 10:32:41 2024 00:25:53.362 read: IOPS=189, BW=758KiB/s (777kB/s)(7616KiB/10041msec) 00:25:53.362 slat (nsec): min=7104, max=85388, avg=9378.99, stdev=3595.34 00:25:53.362 clat (usec): min=702, max=45527, avg=21064.78, stdev=20125.88 00:25:53.362 lat (usec): min=710, max=45564, avg=21074.16, stdev=20125.69 00:25:53.362 clat percentiles (usec): 00:25:53.362 | 1.00th=[ 750], 5.00th=[ 783], 10.00th=[ 799], 20.00th=[ 873], 00:25:53.362 | 30.00th=[ 889], 40.00th=[ 906], 50.00th=[41157], 60.00th=[41157], 00:25:53.362 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:53.362 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:25:53.362 | 99.99th=[45351] 00:25:53.362 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=760.00, stdev=25.16, samples=20 00:25:53.362 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:25:53.362 
lat (usec) : 750=1.10%, 1000=48.69% 00:25:53.362 lat (msec) : 50=50.21% 00:25:53.362 cpu : usr=90.32%, sys=9.29%, ctx=24, majf=0, minf=272 00:25:53.362 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.362 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.363 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:53.363 00:25:53.363 Run status group 0 (all jobs): 00:25:53.363 READ: bw=758KiB/s (777kB/s), 758KiB/s-758KiB/s (777kB/s-777kB/s), io=7616KiB (7799kB), run=10041-10041msec 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 00:25:53.363 real 0m11.115s 00:25:53.363 user 0m10.011s 00:25:53.363 sys 0m1.158s 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 ************************************ 00:25:53.363 END TEST fio_dif_1_default 00:25:53.363 ************************************ 00:25:53.363 10:32:41 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:53.363 10:32:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:53.363 10:32:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 ************************************ 00:25:53.363 START TEST fio_dif_1_multi_subsystems 00:25:53.363 ************************************ 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 bdev_null0 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 [2024-07-25 10:32:41.591376] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 bdev_null1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 10:32:41 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:53.363 { 00:25:53.363 "params": { 00:25:53.363 "name": "Nvme$subsystem", 00:25:53.363 "trtype": "$TEST_TRANSPORT", 00:25:53.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.363 "adrfam": "ipv4", 00:25:53.363 "trsvcid": "$NVMF_PORT", 00:25:53.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.363 "hdgst": ${hdgst:-false}, 00:25:53.363 "ddgst": ${ddgst:-false} 00:25:53.363 }, 00:25:53.363 "method": "bdev_nvme_attach_controller" 00:25:53.363 } 00:25:53.363 EOF 00:25:53.363 )") 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 
00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:53.363 { 00:25:53.363 "params": { 00:25:53.363 "name": "Nvme$subsystem", 00:25:53.363 "trtype": "$TEST_TRANSPORT", 00:25:53.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.363 "adrfam": "ipv4", 00:25:53.363 "trsvcid": "$NVMF_PORT", 00:25:53.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.363 "hdgst": ${hdgst:-false}, 00:25:53.363 "ddgst": ${ddgst:-false} 00:25:53.363 }, 00:25:53.363 "method": "bdev_nvme_attach_controller" 00:25:53.363 } 00:25:53.363 EOF 00:25:53.363 )") 00:25:53.363 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:53.364 "params": { 00:25:53.364 "name": "Nvme0", 00:25:53.364 "trtype": "tcp", 00:25:53.364 "traddr": "10.0.0.2", 00:25:53.364 "adrfam": "ipv4", 00:25:53.364 "trsvcid": "4420", 00:25:53.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:53.364 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:53.364 "hdgst": false, 00:25:53.364 "ddgst": false 00:25:53.364 }, 00:25:53.364 "method": "bdev_nvme_attach_controller" 00:25:53.364 },{ 00:25:53.364 "params": { 00:25:53.364 "name": "Nvme1", 00:25:53.364 "trtype": "tcp", 00:25:53.364 "traddr": "10.0.0.2", 00:25:53.364 "adrfam": "ipv4", 00:25:53.364 "trsvcid": "4420", 00:25:53.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.364 "hdgst": false, 00:25:53.364 "ddgst": false 00:25:53.364 }, 00:25:53.364 "method": "bdev_nvme_attach_controller" 00:25:53.364 }' 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:53.364 10:32:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.364 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:53.364 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:53.364 fio-3.35 00:25:53.364 Starting 2 threads 00:25:53.364 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.390 00:26:03.390 filename0: (groupid=0, jobs=1): err= 0: pid=1594660: Thu Jul 25 10:32:52 2024 00:26:03.390 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:26:03.390 slat (nsec): min=7393, max=74989, avg=9262.85, stdev=4089.22 00:26:03.390 clat (usec): min=40854, max=42533, avg=40993.95, stdev=148.45 00:26:03.390 lat (usec): min=40862, max=42590, avg=41003.21, stdev=150.04 00:26:03.390 clat percentiles (usec): 00:26:03.390 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:03.390 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:03.390 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:03.390 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:26:03.390 | 99.99th=[42730] 
00:26:03.390 bw ( KiB/s): min= 384, max= 416, per=49.65%, avg=388.80, stdev=11.72, samples=20 00:26:03.390 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:03.390 lat (msec) : 50=100.00% 00:26:03.390 cpu : usr=94.66%, sys=4.96%, ctx=20, majf=0, minf=213 00:26:03.390 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.390 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.390 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:03.390 filename1: (groupid=0, jobs=1): err= 0: pid=1594661: Thu Jul 25 10:32:52 2024 00:26:03.390 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10012msec) 00:26:03.390 slat (nsec): min=7340, max=56722, avg=8938.33, stdev=2720.62 00:26:03.390 clat (usec): min=1120, max=42944, avg=40836.05, stdev=2549.75 00:26:03.390 lat (usec): min=1141, max=42958, avg=40844.99, stdev=2549.05 00:26:03.390 clat percentiles (usec): 00:26:03.390 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:03.390 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:03.390 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:03.390 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:26:03.390 | 99.99th=[42730] 00:26:03.390 bw ( KiB/s): min= 384, max= 416, per=49.91%, avg=390.40, stdev=13.13, samples=20 00:26:03.390 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:26:03.390 lat (msec) : 2=0.41%, 50=99.59% 00:26:03.390 cpu : usr=94.23%, sys=5.38%, ctx=22, majf=0, minf=80 00:26:03.390 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.390 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.390 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:03.390 00:26:03.390 Run status group 0 (all jobs): 00:26:03.390 READ: bw=781KiB/s (800kB/s), 390KiB/s-392KiB/s (399kB/s-401kB/s), io=7824KiB (8012kB), run=10010-10012msec 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.390 10:32:52 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.390 00:26:03.390 real 0m11.404s 00:26:03.390 user 0m20.161s 00:26:03.390 sys 0m1.295s 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:03.390 10:32:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 ************************************ 00:26:03.390 END TEST fio_dif_1_multi_subsystems 00:26:03.390 ************************************ 00:26:03.390 10:32:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:03.390 10:32:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:03.390 10:32:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:03.390 10:32:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 ************************************ 00:26:03.390 START TEST fio_dif_rand_params 00:26:03.390 ************************************ 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 bdev_null0 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.390 [2024-07-25 10:32:53.038649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:03.390 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:03.391 { 00:26:03.391 "params": { 00:26:03.391 "name": "Nvme$subsystem", 00:26:03.391 "trtype": "$TEST_TRANSPORT", 00:26:03.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.391 "adrfam": "ipv4", 00:26:03.391 "trsvcid": "$NVMF_PORT", 00:26:03.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:26:03.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.391 "hdgst": ${hdgst:-false}, 00:26:03.391 "ddgst": ${ddgst:-false} 00:26:03.391 }, 00:26:03.391 "method": "bdev_nvme_attach_controller" 00:26:03.391 } 00:26:03.391 EOF 00:26:03.391 )") 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:03.391 "params": { 00:26:03.391 "name": "Nvme0", 00:26:03.391 "trtype": "tcp", 00:26:03.391 "traddr": "10.0.0.2", 00:26:03.391 "adrfam": "ipv4", 00:26:03.391 "trsvcid": "4420", 00:26:03.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:03.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:03.391 "hdgst": false, 00:26:03.391 "ddgst": false 00:26:03.391 }, 00:26:03.391 "method": "bdev_nvme_attach_controller" 00:26:03.391 }' 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:03.391 10:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.649 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:03.649 ... 
00:26:03.649 fio-3.35 00:26:03.649 Starting 3 threads 00:26:03.649 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.217 00:26:10.217 filename0: (groupid=0, jobs=1): err= 0: pid=1595735: Thu Jul 25 10:32:58 2024 00:26:10.217 read: IOPS=202, BW=25.3MiB/s (26.6MB/s)(127MiB/5006msec) 00:26:10.217 slat (nsec): min=6334, max=63302, avg=17611.82, stdev=5358.61 00:26:10.217 clat (usec): min=5969, max=54568, avg=14766.56, stdev=10825.91 00:26:10.217 lat (usec): min=5982, max=54582, avg=14784.17, stdev=10825.85 00:26:10.217 clat percentiles (usec): 00:26:10.217 | 1.00th=[ 7635], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9241], 00:26:10.217 | 30.00th=[10159], 40.00th=[11731], 50.00th=[12387], 60.00th=[13042], 00:26:10.217 | 70.00th=[13566], 80.00th=[14222], 90.00th=[16057], 95.00th=[50594], 00:26:10.217 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:26:10.217 | 99.99th=[54789] 00:26:10.217 bw ( KiB/s): min=16896, max=33280, per=33.69%, avg=25932.80, stdev=5020.98, samples=10 00:26:10.217 iops : min= 132, max= 260, avg=202.60, stdev=39.23, samples=10 00:26:10.217 lat (msec) : 10=29.16%, 20=63.15%, 50=1.18%, 100=6.50% 00:26:10.217 cpu : usr=94.81%, sys=4.80%, ctx=10, majf=0, minf=108 00:26:10.217 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.217 issued rwts: total=1015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:10.217 filename0: (groupid=0, jobs=1): err= 0: pid=1595736: Thu Jul 25 10:32:58 2024 00:26:10.217 read: IOPS=180, BW=22.6MiB/s (23.6MB/s)(114MiB/5044msec) 00:26:10.217 slat (nsec): min=5421, max=38666, avg=17143.68, stdev=4090.93 00:26:10.217 clat (usec): min=5422, max=91149, avg=16560.30, stdev=13329.76 00:26:10.217 lat (usec): min=5435, max=91166, avg=16577.44, stdev=13330.07 00:26:10.217 clat percentiles (usec): 00:26:10.217 | 1.00th=[ 5997], 5.00th=[ 7308], 10.00th=[ 8455], 20.00th=[ 9372], 00:26:10.217 | 30.00th=[11338], 40.00th=[12256], 50.00th=[12911], 60.00th=[13304], 00:26:10.217 | 70.00th=[13960], 80.00th=[14746], 90.00th=[50070], 95.00th=[52691], 00:26:10.217 | 99.00th=[54789], 99.50th=[55837], 99.90th=[90702], 99.95th=[90702], 00:26:10.217 | 99.99th=[90702] 00:26:10.217 bw ( KiB/s): min=18432, max=28672, per=30.17%, avg=23223.70, stdev=3233.14, samples=10 00:26:10.217 iops : min= 144, max= 224, avg=181.40, stdev=25.26, samples=10 00:26:10.217 lat (msec) : 10=23.41%, 20=65.16%, 50=0.88%, 100=10.55% 00:26:10.217 cpu : usr=95.06%, sys=4.48%, ctx=13, majf=0, minf=93 00:26:10.217 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.217 issued rwts: total=910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:10.217 filename0: (groupid=0, jobs=1): err= 0: pid=1595737: Thu Jul 25 10:32:58 2024 00:26:10.217 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(139MiB/5018msec) 00:26:10.217 slat (nsec): min=5554, max=55003, avg=20761.17, stdev=4474.16 00:26:10.217 clat (usec): min=6069, max=58397, avg=13561.62, stdev=5925.13 00:26:10.217 lat (usec): min=6084, max=58419, avg=13582.38, stdev=5925.52 00:26:10.217 clat percentiles (usec): 
00:26:10.217 | 1.00th=[ 6521], 5.00th=[ 6718], 10.00th=[ 7635], 20.00th=[10028], 00:26:10.217 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12780], 60.00th=[14091], 00:26:10.217 | 70.00th=[15139], 80.00th=[16909], 90.00th=[18482], 95.00th=[19268], 00:26:10.217 | 99.00th=[50594], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:26:10.217 | 99.99th=[58459] 00:26:10.217 bw ( KiB/s): min=24368, max=33280, per=36.76%, avg=28292.80, stdev=2583.67, samples=10 00:26:10.217 iops : min= 190, max= 260, avg=221.00, stdev=20.25, samples=10 00:26:10.217 lat (msec) : 10=20.22%, 20=77.08%, 50=1.44%, 100=1.26% 00:26:10.217 cpu : usr=93.50%, sys=5.70%, ctx=39, majf=0, minf=120 00:26:10.217 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.217 issued rwts: total=1108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:10.217 00:26:10.217 Run status group 0 (all jobs): 00:26:10.217 READ: bw=75.2MiB/s (78.8MB/s), 22.6MiB/s-27.6MiB/s (23.6MB/s-28.9MB/s), io=379MiB (398MB), run=5006-5044msec 00:26:10.217 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:10.217 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:10.217 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:10.217 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:10.217 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:10.217 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:10.217 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.217 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.217 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
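Each create_subsystem call in this harness reduces to four rpc_cmd invocations against the running target, as the trace that follows shows for bdev_null0/1/2. In the condensed sketch below the RPC verbs and flags are exactly those in the xtrace; the checkout-relative rpc.py path and the standalone-script framing are assumptions for illustration:

    #!/usr/bin/env bash
    rpc=./spdk/scripts/rpc.py   # assumed path; requires a running SPDK target
    sub_id=0

    # 64 MiB null bdev with 512-byte blocks plus 16 bytes of per-block
    # metadata, DIF type 2 for this pass (the first fio_dif_rand_params
    # pass used --dif-type 3 instead).
    $rpc bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type 2

    # NVMe-oF subsystem, open to any host.
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
        --serial-number "53313233-${sub_id}" --allow-any-host

    # Expose the bdev as a namespace, then listen on TCP for the fio side.
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
        -t tcp -a 10.0.0.2 -s 4420

Teardown (destroy_subsystem in the trace) is the mirror image: nvmf_delete_subsystem on the NQN followed by bdev_null_delete on the backing bdev.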
00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 bdev_null0 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 [2024-07-25 10:32:59.221051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 bdev_null1 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 bdev_null2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 
-- # gen_fio_conf 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:10.218 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:10.218 { 00:26:10.218 "params": { 00:26:10.218 "name": "Nvme$subsystem", 00:26:10.218 "trtype": "$TEST_TRANSPORT", 00:26:10.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.218 "adrfam": "ipv4", 00:26:10.218 "trsvcid": "$NVMF_PORT", 00:26:10.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.218 "hdgst": ${hdgst:-false}, 00:26:10.218 "ddgst": ${ddgst:-false} 00:26:10.218 }, 00:26:10.218 "method": "bdev_nvme_attach_controller" 00:26:10.218 } 00:26:10.219 EOF 00:26:10.219 )") 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:10.219 { 00:26:10.219 "params": { 00:26:10.219 "name": "Nvme$subsystem", 00:26:10.219 "trtype": "$TEST_TRANSPORT", 00:26:10.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.219 "adrfam": "ipv4", 00:26:10.219 "trsvcid": "$NVMF_PORT", 00:26:10.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.219 "hdgst": ${hdgst:-false}, 00:26:10.219 "ddgst": ${ddgst:-false} 00:26:10.219 }, 00:26:10.219 "method": "bdev_nvme_attach_controller" 00:26:10.219 } 00:26:10.219 EOF 00:26:10.219 )") 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file <= files )) 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:10.219 { 00:26:10.219 "params": { 00:26:10.219 "name": "Nvme$subsystem", 00:26:10.219 "trtype": "$TEST_TRANSPORT", 00:26:10.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.219 "adrfam": "ipv4", 00:26:10.219 "trsvcid": "$NVMF_PORT", 00:26:10.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.219 "hdgst": ${hdgst:-false}, 00:26:10.219 "ddgst": ${ddgst:-false} 00:26:10.219 }, 00:26:10.219 "method": "bdev_nvme_attach_controller" 00:26:10.219 } 00:26:10.219 EOF 00:26:10.219 )") 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:10.219 "params": { 00:26:10.219 "name": "Nvme0", 00:26:10.219 "trtype": "tcp", 00:26:10.219 "traddr": "10.0.0.2", 00:26:10.219 "adrfam": "ipv4", 00:26:10.219 "trsvcid": "4420", 00:26:10.219 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:10.219 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:10.219 "hdgst": false, 00:26:10.219 "ddgst": false 00:26:10.219 }, 00:26:10.219 "method": "bdev_nvme_attach_controller" 00:26:10.219 },{ 00:26:10.219 "params": { 00:26:10.219 "name": "Nvme1", 00:26:10.219 "trtype": "tcp", 00:26:10.219 "traddr": "10.0.0.2", 00:26:10.219 "adrfam": "ipv4", 00:26:10.219 "trsvcid": "4420", 00:26:10.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:10.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:10.219 "hdgst": false, 00:26:10.219 "ddgst": false 00:26:10.219 }, 00:26:10.219 "method": "bdev_nvme_attach_controller" 00:26:10.219 },{ 00:26:10.219 "params": { 00:26:10.219 "name": "Nvme2", 00:26:10.219 "trtype": "tcp", 00:26:10.219 "traddr": "10.0.0.2", 00:26:10.219 "adrfam": "ipv4", 00:26:10.219 "trsvcid": "4420", 00:26:10.219 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:10.219 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:10.219 "hdgst": false, 00:26:10.219 "ddgst": false 00:26:10.219 }, 00:26:10.219 "method": "bdev_nvme_attach_controller" 00:26:10.219 }' 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:10.219 10:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.219 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:10.219 ... 00:26:10.219 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:10.219 ... 00:26:10.219 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:10.219 ... 00:26:10.219 fio-3.35 00:26:10.219 Starting 24 threads 00:26:10.219 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.439 00:26:22.439 filename0: (groupid=0, jobs=1): err= 0: pid=1596422: Thu Jul 25 10:33:10 2024 00:26:22.439 read: IOPS=194, BW=777KiB/s (796kB/s)(7912KiB/10184msec) 00:26:22.439 slat (nsec): min=4920, max=60714, avg=11828.47, stdev=5389.39 00:26:22.439 clat (msec): min=4, max=384, avg=82.05, stdev=95.93 00:26:22.439 lat (msec): min=4, max=384, avg=82.06, stdev=95.94 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 12], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 27], 00:26:22.439 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 31], 60.00th=[ 32], 00:26:22.439 | 70.00th=[ 42], 80.00th=[ 220], 90.00th=[ 262], 95.00th=[ 284], 00:26:22.439 | 99.00th=[ 305], 99.50th=[ 376], 99.90th=[ 384], 99.95th=[ 384], 00:26:22.439 | 99.99th=[ 384] 00:26:22.439 bw ( KiB/s): min= 208, max= 2272, per=6.13%, avg=784.65, stdev=817.99, samples=20 00:26:22.439 iops : min= 52, max= 568, avg=196.10, stdev=204.53, samples=20 00:26:22.439 lat (msec) : 10=0.81%, 20=5.97%, 50=67.44%, 100=1.52%, 250=11.73% 00:26:22.439 lat (msec) : 500=12.54% 00:26:22.439 cpu : usr=98.52%, sys=1.07%, ctx=27, majf=0, minf=64 00:26:22.439 IO depths : 1=0.4%, 2=1.0%, 4=7.6%, 8=78.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=89.1%, 8=5.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 issued rwts: total=1978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.439 filename0: (groupid=0, jobs=1): err= 0: pid=1596423: Thu Jul 25 10:33:10 2024 00:26:22.439 read: IOPS=137, BW=552KiB/s (565kB/s)(5592KiB/10131msec) 00:26:22.439 slat (usec): min=8, max=159, avg=87.06, stdev=35.55 00:26:22.439 clat (msec): min=25, max=505, avg=115.20, stdev=122.67 00:26:22.439 lat (msec): min=25, max=505, avg=115.29, stdev=122.65 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 27], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 41], 00:26:22.439 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.439 | 70.00th=[ 46], 80.00th=[ 239], 90.00th=[ 338], 95.00th=[ 393], 00:26:22.439 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 506], 99.95th=[ 506], 00:26:22.439 | 99.99th=[ 506] 00:26:22.439 bw ( KiB/s): min= 127, max= 1536, per=4.31%, avg=552.55, stdev=559.08, samples=20 00:26:22.439 iops : min= 31, max= 384, avg=138.05, stdev=139.80, samples=20 00:26:22.439 lat (msec) : 50=70.24%, 250=13.73%, 500=15.88%, 750=0.14% 00:26:22.439 cpu : 
usr=98.23%, sys=1.13%, ctx=44, majf=0, minf=46 00:26:22.439 IO depths : 1=4.7%, 2=10.4%, 4=23.4%, 8=53.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 issued rwts: total=1398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.439 filename0: (groupid=0, jobs=1): err= 0: pid=1596424: Thu Jul 25 10:33:10 2024 00:26:22.439 read: IOPS=135, BW=541KiB/s (554kB/s)(5504KiB/10182msec) 00:26:22.439 slat (usec): min=5, max=181, avg=67.16, stdev=40.95 00:26:22.439 clat (msec): min=26, max=419, avg=117.81, stdev=124.87 00:26:22.439 lat (msec): min=26, max=419, avg=117.88, stdev=124.86 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:26:22.439 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:26:22.439 | 70.00th=[ 114], 80.00th=[ 241], 90.00th=[ 376], 95.00th=[ 388], 00:26:22.439 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 422], 00:26:22.439 | 99.99th=[ 422] 00:26:22.439 bw ( KiB/s): min= 128, max= 1536, per=4.24%, avg=543.95, stdev=553.88, samples=20 00:26:22.439 iops : min= 32, max= 384, avg=135.95, stdev=138.48, samples=20 00:26:22.439 lat (msec) : 50=69.48%, 100=0.29%, 250=11.63%, 500=18.60% 00:26:22.439 cpu : usr=98.32%, sys=1.14%, ctx=51, majf=0, minf=37 00:26:22.439 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.439 filename0: (groupid=0, jobs=1): err= 0: pid=1596425: Thu Jul 25 10:33:10 2024 00:26:22.439 read: IOPS=128, BW=513KiB/s (525kB/s)(5184KiB/10102msec) 00:26:22.439 slat (usec): min=16, max=151, avg=52.70, stdev=34.18 00:26:22.439 clat (msec): min=34, max=694, avg=124.23, stdev=146.74 00:26:22.439 lat (msec): min=34, max=694, avg=124.29, stdev=146.76 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:22.439 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:26:22.439 | 70.00th=[ 44], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 409], 00:26:22.439 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 693], 99.95th=[ 693], 00:26:22.439 | 99.99th=[ 693] 00:26:22.439 bw ( KiB/s): min= 128, max= 1536, per=4.20%, avg=538.95, stdev=585.92, samples=19 00:26:22.439 iops : min= 32, max= 384, avg=134.74, stdev=146.48, samples=19 00:26:22.439 lat (msec) : 50=73.92%, 100=0.15%, 250=5.09%, 500=19.60%, 750=1.23% 00:26:22.439 cpu : usr=97.35%, sys=1.49%, ctx=159, majf=0, minf=53 00:26:22.439 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.439 filename0: (groupid=0, jobs=1): err= 0: pid=1596426: Thu Jul 25 10:33:10 2024 00:26:22.439 read: IOPS=129, BW=517KiB/s (529kB/s)(5248KiB/10153msec) 00:26:22.439 slat (usec): min=9, 
max=176, avg=50.47, stdev=35.22 00:26:22.439 clat (msec): min=25, max=497, avg=123.38, stdev=143.62 00:26:22.439 lat (msec): min=25, max=497, avg=123.44, stdev=143.61 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 27], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:26:22.439 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 44], 00:26:22.439 | 70.00th=[ 46], 80.00th=[ 330], 90.00th=[ 384], 95.00th=[ 401], 00:26:22.439 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 498], 99.95th=[ 498], 00:26:22.439 | 99.99th=[ 498] 00:26:22.439 bw ( KiB/s): min= 127, max= 1536, per=4.05%, avg=518.10, stdev=582.94, samples=20 00:26:22.439 iops : min= 31, max= 384, avg=129.45, stdev=145.79, samples=20 00:26:22.439 lat (msec) : 50=70.27%, 100=3.96%, 250=2.74%, 500=23.02% 00:26:22.439 cpu : usr=97.30%, sys=1.73%, ctx=133, majf=0, minf=51 00:26:22.439 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.440 filename0: (groupid=0, jobs=1): err= 0: pid=1596427: Thu Jul 25 10:33:10 2024 00:26:22.440 read: IOPS=129, BW=518KiB/s (531kB/s)(5248KiB/10126msec) 00:26:22.440 slat (usec): min=7, max=215, avg=54.48, stdev=40.67 00:26:22.440 clat (msec): min=29, max=569, avg=123.04, stdev=144.11 00:26:22.440 lat (msec): min=29, max=569, avg=123.09, stdev=144.13 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:22.440 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:26:22.440 | 70.00th=[ 45], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 401], 00:26:22.440 | 99.00th=[ 472], 99.50th=[ 506], 99.90th=[ 567], 99.95th=[ 567], 00:26:22.440 | 99.99th=[ 567] 00:26:22.440 bw ( KiB/s): min= 112, max= 1536, per=4.05%, avg=518.40, stdev=599.48, samples=20 00:26:22.440 iops : min= 28, max= 384, avg=129.60, stdev=149.87, samples=20 00:26:22.440 lat (msec) : 50=73.02%, 100=1.37%, 250=3.35%, 500=21.65%, 750=0.61% 00:26:22.440 cpu : usr=95.60%, sys=2.44%, ctx=308, majf=0, minf=66 00:26:22.440 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.440 filename0: (groupid=0, jobs=1): err= 0: pid=1596428: Thu Jul 25 10:33:10 2024 00:26:22.440 read: IOPS=135, BW=543KiB/s (556kB/s)(5512KiB/10152msec) 00:26:22.440 slat (usec): min=8, max=175, avg=40.35, stdev=40.54 00:26:22.440 clat (msec): min=27, max=501, avg=117.30, stdev=123.73 00:26:22.440 lat (msec): min=27, max=501, avg=117.34, stdev=123.73 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:26:22.440 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 44], 00:26:22.440 | 70.00th=[ 46], 80.00th=[ 255], 90.00th=[ 326], 95.00th=[ 384], 00:26:22.440 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 502], 99.95th=[ 502], 00:26:22.440 | 99.99th=[ 502] 00:26:22.440 bw ( KiB/s): min= 127, max= 1536, per=4.25%, avg=544.55, stdev=557.93, samples=20 00:26:22.440 
iops : min= 31, max= 384, avg=136.05, stdev=139.52, samples=20 00:26:22.440 lat (msec) : 50=70.25%, 250=8.42%, 500=21.19%, 750=0.15% 00:26:22.440 cpu : usr=96.70%, sys=1.85%, ctx=138, majf=0, minf=43 00:26:22.440 IO depths : 1=4.8%, 2=10.2%, 4=22.3%, 8=55.0%, 16=7.8%, 32=0.0%, >=64=0.0% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 issued rwts: total=1378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.440 filename0: (groupid=0, jobs=1): err= 0: pid=1596429: Thu Jul 25 10:33:10 2024 00:26:22.440 read: IOPS=130, BW=523KiB/s (535kB/s)(5312KiB/10159msec) 00:26:22.440 slat (usec): min=6, max=234, avg=90.76, stdev=32.71 00:26:22.440 clat (msec): min=34, max=471, avg=121.67, stdev=135.11 00:26:22.440 lat (msec): min=34, max=471, avg=121.76, stdev=135.11 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:26:22.440 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.440 | 70.00th=[ 45], 80.00th=[ 284], 90.00th=[ 376], 95.00th=[ 393], 00:26:22.440 | 99.00th=[ 414], 99.50th=[ 418], 99.90th=[ 472], 99.95th=[ 472], 00:26:22.440 | 99.99th=[ 472] 00:26:22.440 bw ( KiB/s): min= 127, max= 1536, per=4.10%, avg=524.70, stdev=577.85, samples=20 00:26:22.440 iops : min= 31, max= 384, avg=131.10, stdev=144.51, samples=20 00:26:22.440 lat (msec) : 50=72.29%, 250=6.02%, 500=21.69% 00:26:22.440 cpu : usr=95.75%, sys=2.28%, ctx=109, majf=0, minf=49 00:26:22.440 IO depths : 1=3.1%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 issued rwts: total=1328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.440 filename1: (groupid=0, jobs=1): err= 0: pid=1596430: Thu Jul 25 10:33:10 2024 00:26:22.440 read: IOPS=128, BW=512KiB/s (524kB/s)(5184KiB/10123msec) 00:26:22.440 slat (usec): min=18, max=166, avg=106.68, stdev=18.10 00:26:22.440 clat (msec): min=31, max=569, avg=123.88, stdev=146.50 00:26:22.440 lat (msec): min=31, max=569, avg=123.99, stdev=146.50 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 41], 00:26:22.440 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.440 | 70.00th=[ 44], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 409], 00:26:22.440 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:26:22.440 | 99.99th=[ 567] 00:26:22.440 bw ( KiB/s): min= 128, max= 1536, per=4.20%, avg=538.95, stdev=586.74, samples=19 00:26:22.440 iops : min= 32, max= 384, avg=134.74, stdev=146.69, samples=19 00:26:22.440 lat (msec) : 50=74.07%, 250=4.63%, 500=19.91%, 750=1.39% 00:26:22.440 cpu : usr=97.53%, sys=1.46%, ctx=102, majf=0, minf=44 00:26:22.440 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.440 filename1: (groupid=0, jobs=1): err= 0: pid=1596431: 
Thu Jul 25 10:33:10 2024 00:26:22.440 read: IOPS=129, BW=518KiB/s (531kB/s)(5248KiB/10126msec) 00:26:22.440 slat (usec): min=11, max=158, avg=101.90, stdev=26.21 00:26:22.440 clat (msec): min=28, max=557, avg=122.58, stdev=144.57 00:26:22.440 lat (msec): min=28, max=557, avg=122.68, stdev=144.56 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 41], 00:26:22.440 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.440 | 70.00th=[ 44], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 401], 00:26:22.440 | 99.00th=[ 472], 99.50th=[ 523], 99.90th=[ 558], 99.95th=[ 558], 00:26:22.440 | 99.99th=[ 558] 00:26:22.440 bw ( KiB/s): min= 112, max= 1536, per=4.05%, avg=518.40, stdev=599.48, samples=20 00:26:22.440 iops : min= 28, max= 384, avg=129.60, stdev=149.87, samples=20 00:26:22.440 lat (msec) : 50=73.32%, 100=1.22%, 250=3.20%, 500=21.49%, 750=0.76% 00:26:22.440 cpu : usr=96.78%, sys=1.86%, ctx=460, majf=0, minf=50 00:26:22.440 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.440 filename1: (groupid=0, jobs=1): err= 0: pid=1596432: Thu Jul 25 10:33:10 2024 00:26:22.440 read: IOPS=128, BW=512KiB/s (524kB/s)(5184KiB/10123msec) 00:26:22.440 slat (usec): min=15, max=158, avg=42.89, stdev=31.21 00:26:22.440 clat (msec): min=40, max=569, avg=124.40, stdev=146.32 00:26:22.440 lat (msec): min=41, max=569, avg=124.45, stdev=146.34 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 42], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:22.440 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:26:22.440 | 70.00th=[ 44], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 409], 00:26:22.440 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:26:22.440 | 99.99th=[ 567] 00:26:22.440 bw ( KiB/s): min= 128, max= 1536, per=4.20%, avg=538.95, stdev=585.75, samples=19 00:26:22.440 iops : min= 32, max= 384, avg=134.74, stdev=146.44, samples=19 00:26:22.440 lat (msec) : 50=74.07%, 250=4.94%, 500=19.60%, 750=1.39% 00:26:22.440 cpu : usr=96.61%, sys=1.95%, ctx=182, majf=0, minf=61 00:26:22.440 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.440 filename1: (groupid=0, jobs=1): err= 0: pid=1596433: Thu Jul 25 10:33:10 2024 00:26:22.440 read: IOPS=130, BW=523KiB/s (535kB/s)(5312KiB/10159msec) 00:26:22.440 slat (usec): min=5, max=166, avg=100.27, stdev=25.95 00:26:22.440 clat (msec): min=29, max=470, avg=121.54, stdev=137.23 00:26:22.440 lat (msec): min=29, max=470, avg=121.64, stdev=137.23 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 41], 00:26:22.440 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.440 | 70.00th=[ 45], 80.00th=[ 309], 90.00th=[ 372], 95.00th=[ 401], 00:26:22.440 | 99.00th=[ 409], 99.50th=[ 456], 99.90th=[ 472], 99.95th=[ 472], 
00:26:22.440 | 99.99th=[ 472] 00:26:22.440 bw ( KiB/s): min= 128, max= 1536, per=4.10%, avg=524.75, stdev=564.56, samples=20 00:26:22.440 iops : min= 32, max= 384, avg=131.15, stdev=141.16, samples=20 00:26:22.440 lat (msec) : 50=72.14%, 100=1.36%, 250=4.97%, 500=21.54% 00:26:22.440 cpu : usr=97.41%, sys=1.47%, ctx=61, majf=0, minf=65 00:26:22.440 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 issued rwts: total=1328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.440 filename1: (groupid=0, jobs=1): err= 0: pid=1596434: Thu Jul 25 10:33:10 2024 00:26:22.440 read: IOPS=130, BW=521KiB/s (533kB/s)(5304KiB/10184msec) 00:26:22.440 slat (usec): min=4, max=223, avg=102.81, stdev=23.52 00:26:22.440 clat (msec): min=26, max=558, avg=121.76, stdev=138.83 00:26:22.440 lat (msec): min=26, max=558, avg=121.86, stdev=138.82 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 41], 00:26:22.440 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.440 | 70.00th=[ 44], 80.00th=[ 275], 90.00th=[ 376], 95.00th=[ 397], 00:26:22.440 | 99.00th=[ 481], 99.50th=[ 535], 99.90th=[ 558], 99.95th=[ 558], 00:26:22.441 | 99.99th=[ 558] 00:26:22.441 bw ( KiB/s): min= 127, max= 1536, per=4.09%, avg=523.90, stdev=564.65, samples=20 00:26:22.441 iops : min= 31, max= 384, avg=130.90, stdev=141.21, samples=20 00:26:22.441 lat (msec) : 50=72.55%, 250=5.73%, 500=20.81%, 750=0.90% 00:26:22.441 cpu : usr=97.16%, sys=1.68%, ctx=112, majf=0, minf=49 00:26:22.441 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 issued rwts: total=1326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.441 filename1: (groupid=0, jobs=1): err= 0: pid=1596435: Thu Jul 25 10:33:10 2024 00:26:22.441 read: IOPS=129, BW=517KiB/s (529kB/s)(5248KiB/10152msec) 00:26:22.441 slat (usec): min=20, max=159, avg=96.10, stdev=31.32 00:26:22.441 clat (msec): min=25, max=573, avg=122.96, stdev=142.21 00:26:22.441 lat (msec): min=25, max=573, avg=123.06, stdev=142.21 00:26:22.441 clat percentiles (msec): 00:26:22.441 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 41], 00:26:22.441 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.441 | 70.00th=[ 44], 80.00th=[ 330], 90.00th=[ 384], 95.00th=[ 401], 00:26:22.441 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 575], 99.95th=[ 575], 00:26:22.441 | 99.99th=[ 575] 00:26:22.441 bw ( KiB/s): min= 111, max= 1536, per=4.05%, avg=518.10, stdev=572.89, samples=20 00:26:22.441 iops : min= 27, max= 384, avg=129.45, stdev=143.28, samples=20 00:26:22.441 lat (msec) : 50=73.32%, 250=3.51%, 500=23.02%, 750=0.15% 00:26:22.441 cpu : usr=97.82%, sys=1.34%, ctx=121, majf=0, minf=56 00:26:22.441 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 issued rwts: total=1312,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:22.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.441 filename1: (groupid=0, jobs=1): err= 0: pid=1596436: Thu Jul 25 10:33:10 2024 00:26:22.441 read: IOPS=135, BW=543KiB/s (556kB/s)(5504KiB/10130msec) 00:26:22.441 slat (usec): min=9, max=155, avg=36.37, stdev=37.10 00:26:22.441 clat (msec): min=25, max=517, avg=117.46, stdev=127.19 00:26:22.441 lat (msec): min=25, max=517, avg=117.49, stdev=127.20 00:26:22.441 clat percentiles (msec): 00:26:22.441 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:22.441 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 43], 00:26:22.441 | 70.00th=[ 52], 80.00th=[ 243], 90.00th=[ 359], 95.00th=[ 384], 00:26:22.441 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 518], 99.95th=[ 518], 00:26:22.441 | 99.99th=[ 518] 00:26:22.441 bw ( KiB/s): min= 127, max= 1536, per=4.24%, avg=543.70, stdev=583.89, samples=20 00:26:22.441 iops : min= 31, max= 384, avg=135.85, stdev=146.03, samples=20 00:26:22.441 lat (msec) : 50=69.62%, 100=2.47%, 250=10.03%, 500=17.73%, 750=0.15% 00:26:22.441 cpu : usr=97.73%, sys=1.45%, ctx=65, majf=0, minf=33 00:26:22.441 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.441 filename1: (groupid=0, jobs=1): err= 0: pid=1596437: Thu Jul 25 10:33:10 2024 00:26:22.441 read: IOPS=133, BW=536KiB/s (548kB/s)(5440KiB/10157msec) 00:26:22.441 slat (usec): min=7, max=157, avg=84.96, stdev=35.45 00:26:22.441 clat (msec): min=27, max=479, avg=118.75, stdev=126.25 00:26:22.441 lat (msec): min=27, max=479, avg=118.84, stdev=126.24 00:26:22.441 clat percentiles (msec): 00:26:22.441 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:26:22.441 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.441 | 70.00th=[ 45], 80.00th=[ 247], 90.00th=[ 359], 95.00th=[ 384], 00:26:22.441 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 481], 99.95th=[ 481], 00:26:22.441 | 99.99th=[ 481] 00:26:22.441 bw ( KiB/s): min= 128, max= 1536, per=4.20%, avg=537.50, stdev=556.44, samples=20 00:26:22.441 iops : min= 32, max= 384, avg=134.30, stdev=139.16, samples=20 00:26:22.441 lat (msec) : 50=70.44%, 100=0.29%, 250=9.26%, 500=20.00% 00:26:22.441 cpu : usr=98.23%, sys=1.16%, ctx=38, majf=0, minf=53 00:26:22.441 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 issued rwts: total=1360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.441 filename2: (groupid=0, jobs=1): err= 0: pid=1596438: Thu Jul 25 10:33:10 2024 00:26:22.441 read: IOPS=129, BW=518KiB/s (530kB/s)(5248KiB/10138msec) 00:26:22.441 slat (usec): min=5, max=158, avg=92.32, stdev=19.72 00:26:22.441 clat (msec): min=25, max=581, avg=122.86, stdev=146.86 00:26:22.441 lat (msec): min=25, max=581, avg=122.96, stdev=146.85 00:26:22.441 clat percentiles (msec): 00:26:22.441 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:26:22.441 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.441 | 
70.00th=[ 44], 80.00th=[ 330], 90.00th=[ 384], 95.00th=[ 405], 00:26:22.441 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:26:22.441 | 99.99th=[ 584] 00:26:22.441 bw ( KiB/s): min= 128, max= 1536, per=4.26%, avg=545.68, stdev=596.54, samples=19 00:26:22.441 iops : min= 32, max= 384, avg=136.42, stdev=149.14, samples=19 00:26:22.441 lat (msec) : 50=72.41%, 100=3.20%, 250=1.37%, 500=21.65%, 750=1.37% 00:26:22.441 cpu : usr=98.46%, sys=1.11%, ctx=33, majf=0, minf=60 00:26:22.441 IO depths : 1=4.0%, 2=10.1%, 4=24.8%, 8=52.6%, 16=8.5%, 32=0.0%, >=64=0.0% 00:26:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.441 filename2: (groupid=0, jobs=1): err= 0: pid=1596439: Thu Jul 25 10:33:10 2024 00:26:22.441 read: IOPS=129, BW=517KiB/s (530kB/s)(5248KiB/10143msec) 00:26:22.441 slat (usec): min=5, max=243, avg=85.47, stdev=35.45 00:26:22.441 clat (msec): min=30, max=587, avg=123.06, stdev=143.76 00:26:22.441 lat (msec): min=30, max=587, avg=123.15, stdev=143.76 00:26:22.441 clat percentiles (msec): 00:26:22.441 | 1.00th=[ 37], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:26:22.441 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:26:22.441 | 70.00th=[ 45], 80.00th=[ 275], 90.00th=[ 376], 95.00th=[ 409], 00:26:22.441 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:26:22.441 | 99.99th=[ 592] 00:26:22.441 bw ( KiB/s): min= 128, max= 1536, per=4.26%, avg=545.53, stdev=574.61, samples=19 00:26:22.441 iops : min= 32, max= 384, avg=136.37, stdev=143.63, samples=19 00:26:22.441 lat (msec) : 50=72.87%, 100=0.15%, 250=5.34%, 500=20.12%, 750=1.52% 00:26:22.441 cpu : usr=98.51%, sys=1.00%, ctx=62, majf=0, minf=54 00:26:22.441 IO depths : 1=1.4%, 2=7.7%, 4=25.0%, 8=54.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.441 filename2: (groupid=0, jobs=1): err= 0: pid=1596440: Thu Jul 25 10:33:10 2024 00:26:22.441 read: IOPS=127, BW=511KiB/s (524kB/s)(5184KiB/10135msec) 00:26:22.441 slat (usec): min=5, max=444, avg=97.29, stdev=26.23 00:26:22.441 clat (msec): min=36, max=579, avg=124.05, stdev=146.48 00:26:22.441 lat (msec): min=37, max=579, avg=124.15, stdev=146.47 00:26:22.441 clat percentiles (msec): 00:26:22.441 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 41], 00:26:22.441 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.441 | 70.00th=[ 44], 80.00th=[ 359], 90.00th=[ 384], 95.00th=[ 409], 00:26:22.441 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:26:22.441 | 99.99th=[ 584] 00:26:22.441 bw ( KiB/s): min= 128, max= 1536, per=4.20%, avg=538.74, stdev=594.46, samples=19 00:26:22.441 iops : min= 32, max= 384, avg=134.68, stdev=148.61, samples=19 00:26:22.441 lat (msec) : 50=73.92%, 100=0.15%, 250=4.78%, 500=19.91%, 750=1.23% 00:26:22.441 cpu : usr=98.47%, sys=1.11%, ctx=21, majf=0, minf=53 00:26:22.441 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:26:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.441 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.441 filename2: (groupid=0, jobs=1): err= 0: pid=1596441: Thu Jul 25 10:33:10 2024 00:26:22.441 read: IOPS=129, BW=518KiB/s (531kB/s)(5248KiB/10127msec) 00:26:22.441 slat (nsec): min=6935, max=72729, avg=19739.58, stdev=11695.03 00:26:22.441 clat (msec): min=26, max=494, avg=123.31, stdev=141.04 00:26:22.441 lat (msec): min=26, max=494, avg=123.33, stdev=141.05 00:26:22.441 clat percentiles (msec): 00:26:22.441 | 1.00th=[ 42], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:22.441 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:26:22.441 | 70.00th=[ 45], 80.00th=[ 321], 90.00th=[ 384], 95.00th=[ 401], 00:26:22.441 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], 00:26:22.441 | 99.99th=[ 493] 00:26:22.441 bw ( KiB/s): min= 127, max= 1536, per=4.05%, avg=518.20, stdev=571.43, samples=20 00:26:22.442 iops : min= 31, max= 384, avg=129.50, stdev=142.86, samples=20 00:26:22.442 lat (msec) : 50=73.02%, 100=0.15%, 250=3.66%, 500=23.17% 00:26:22.442 cpu : usr=98.03%, sys=1.54%, ctx=22, majf=0, minf=38 00:26:22.442 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:22.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.442 filename2: (groupid=0, jobs=1): err= 0: pid=1596442: Thu Jul 25 10:33:10 2024 00:26:22.442 read: IOPS=129, BW=517KiB/s (529kB/s)(5248KiB/10152msec) 00:26:22.442 slat (usec): min=8, max=151, avg=69.00, stdev=42.25 00:26:22.442 clat (msec): min=26, max=562, avg=122.97, stdev=142.44 00:26:22.442 lat (msec): min=26, max=562, avg=123.04, stdev=142.44 00:26:22.442 clat percentiles (msec): 00:26:22.442 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:26:22.442 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:26:22.442 | 70.00th=[ 44], 80.00th=[ 321], 90.00th=[ 384], 95.00th=[ 401], 00:26:22.442 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 558], 99.95th=[ 558], 00:26:22.442 | 99.99th=[ 558] 00:26:22.442 bw ( KiB/s): min= 111, max= 1552, per=4.05%, avg=518.10, stdev=574.98, samples=20 00:26:22.442 iops : min= 27, max= 388, avg=129.45, stdev=143.80, samples=20 00:26:22.442 lat (msec) : 50=72.87%, 100=0.46%, 250=4.12%, 500=22.10%, 750=0.46% 00:26:22.442 cpu : usr=98.49%, sys=1.05%, ctx=50, majf=0, minf=56 00:26:22.442 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:26:22.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.442 filename2: (groupid=0, jobs=1): err= 0: pid=1596443: Thu Jul 25 10:33:10 2024 00:26:22.442 read: IOPS=128, BW=513KiB/s (525kB/s)(5184KiB/10113msec) 00:26:22.442 slat (nsec): min=5572, max=67491, avg=19875.01, stdev=10055.38 00:26:22.442 clat (msec): min=34, max=705, avg=124.69, stdev=146.93 00:26:22.442 lat (msec): min=34, max=705, avg=124.71, stdev=146.93 
00:26:22.442 clat percentiles (msec): 00:26:22.442 | 1.00th=[ 42], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:22.442 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 43], 00:26:22.442 | 70.00th=[ 45], 80.00th=[ 355], 90.00th=[ 393], 95.00th=[ 409], 00:26:22.442 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 709], 99.95th=[ 709], 00:26:22.442 | 99.99th=[ 709] 00:26:22.442 bw ( KiB/s): min= 128, max= 1536, per=4.20%, avg=538.74, stdev=592.54, samples=19 00:26:22.442 iops : min= 32, max= 384, avg=134.68, stdev=148.14, samples=19 00:26:22.442 lat (msec) : 50=73.77%, 100=0.31%, 250=5.09%, 500=19.60%, 750=1.23% 00:26:22.442 cpu : usr=97.64%, sys=1.76%, ctx=38, majf=0, minf=58 00:26:22.442 IO depths : 1=2.2%, 2=8.4%, 4=25.0%, 8=54.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:26:22.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.442 filename2: (groupid=0, jobs=1): err= 0: pid=1596444: Thu Jul 25 10:33:10 2024 00:26:22.442 read: IOPS=133, BW=534KiB/s (547kB/s)(5440KiB/10183msec) 00:26:22.442 slat (usec): min=5, max=173, avg=79.77, stdev=34.68 00:26:22.442 clat (msec): min=26, max=545, avg=119.11, stdev=130.64 00:26:22.442 lat (msec): min=26, max=545, avg=119.19, stdev=130.64 00:26:22.442 clat percentiles (msec): 00:26:22.442 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:26:22.442 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:22.442 | 70.00th=[ 46], 80.00th=[ 245], 90.00th=[ 376], 95.00th=[ 393], 00:26:22.442 | 99.00th=[ 405], 99.50th=[ 510], 99.90th=[ 550], 99.95th=[ 550], 00:26:22.442 | 99.99th=[ 550] 00:26:22.442 bw ( KiB/s): min= 128, max= 1536, per=4.20%, avg=537.55, stdev=562.26, samples=20 00:26:22.442 iops : min= 32, max= 384, avg=134.35, stdev=140.58, samples=20 00:26:22.442 lat (msec) : 50=70.44%, 100=1.47%, 250=8.68%, 500=18.82%, 750=0.59% 00:26:22.442 cpu : usr=97.63%, sys=1.57%, ctx=72, majf=0, minf=46 00:26:22.442 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:22.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 issued rwts: total=1360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.442 filename2: (groupid=0, jobs=1): err= 0: pid=1596445: Thu Jul 25 10:33:10 2024 00:26:22.442 read: IOPS=137, BW=548KiB/s (562kB/s)(5568KiB/10152msec) 00:26:22.442 slat (usec): min=8, max=143, avg=35.23, stdev=36.61 00:26:22.442 clat (msec): min=25, max=415, avg=116.15, stdev=120.47 00:26:22.442 lat (msec): min=25, max=415, avg=116.18, stdev=120.48 00:26:22.442 clat percentiles (msec): 00:26:22.442 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:22.442 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 43], 00:26:22.442 | 70.00th=[ 83], 80.00th=[ 232], 90.00th=[ 330], 95.00th=[ 393], 00:26:22.442 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 418], 00:26:22.442 | 99.99th=[ 418] 00:26:22.442 bw ( KiB/s): min= 128, max= 1536, per=4.30%, avg=550.15, stdev=567.35, samples=20 00:26:22.442 iops : min= 32, max= 384, avg=137.50, stdev=141.86, samples=20 00:26:22.442 lat (msec) : 50=68.97%, 100=1.15%, 250=14.37%, 500=15.52% 00:26:22.442 cpu : 
usr=98.39%, sys=1.17%, ctx=20, majf=0, minf=41 00:26:22.442 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:22.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.442 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.442 00:26:22.442 Run status group 0 (all jobs): 00:26:22.442 READ: bw=12.5MiB/s (13.1MB/s), 511KiB/s-777KiB/s (524kB/s-796kB/s), io=127MiB (133MB), run=10102-10184msec 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.442 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.443 bdev_null0 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.443 [2024-07-25 10:33:10.925038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
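Stripped of the xtrace noise, the per-subsystem setup traced above comes down to four RPCs, and the same sequence repeats next for cnode1. A minimal standalone sketch against an already-running target, using SPDK's scripts/rpc.py with the values copied from the trace:

# Null bdev (64 MiB, 512 B blocks) with 16-byte metadata and DIF type 1,
# exported over NVMe/TCP on the namespaced target address.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420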
00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.443 bdev_null1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.443 { 00:26:22.443 "params": { 00:26:22.443 "name": "Nvme$subsystem", 00:26:22.443 "trtype": "$TEST_TRANSPORT", 00:26:22.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.443 "adrfam": "ipv4", 00:26:22.443 "trsvcid": "$NVMF_PORT", 00:26:22.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.443 "hdgst": ${hdgst:-false}, 00:26:22.443 "ddgst": ${ddgst:-false} 00:26:22.443 }, 00:26:22.443 "method": "bdev_nvme_attach_controller" 00:26:22.443 } 00:26:22.443 EOF 00:26:22.443 )") 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.443 { 00:26:22.443 "params": { 00:26:22.443 "name": "Nvme$subsystem", 00:26:22.443 "trtype": "$TEST_TRANSPORT", 00:26:22.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.443 "adrfam": "ipv4", 00:26:22.443 "trsvcid": "$NVMF_PORT", 00:26:22.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.443 "hdgst": ${hdgst:-false}, 00:26:22.443 "ddgst": ${ddgst:-false} 00:26:22.443 }, 00:26:22.443 "method": "bdev_nvme_attach_controller" 00:26:22.443 } 00:26:22.443 EOF 00:26:22.443 )") 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
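Once the sanitizer probing above settles (no libasan or libclang_rt.asan found, so asan_lib stays empty), fio_bdev resolves to an LD_PRELOAD of SPDK's external ioengine into stock fio. A sketch of the final invocation with ordinary files standing in for the /dev/fd/62 and /dev/fd/61 descriptors the helper actually passes; jobfile.fio and bdev.json are stand-in names:

# Preload SPDK's fio plugin; the JSON config tells the spdk_bdev ioengine
# which bdevs to construct before the job starts.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json jobfile.fio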
00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:22.443 10:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:22.443 "params": { 00:26:22.443 "name": "Nvme0", 00:26:22.443 "trtype": "tcp", 00:26:22.443 "traddr": "10.0.0.2", 00:26:22.443 "adrfam": "ipv4", 00:26:22.443 "trsvcid": "4420", 00:26:22.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:22.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:22.443 "hdgst": false, 00:26:22.443 "ddgst": false 00:26:22.443 }, 00:26:22.443 "method": "bdev_nvme_attach_controller" 00:26:22.443 },{ 00:26:22.444 "params": { 00:26:22.444 "name": "Nvme1", 00:26:22.444 "trtype": "tcp", 00:26:22.444 "traddr": "10.0.0.2", 00:26:22.444 "adrfam": "ipv4", 00:26:22.444 "trsvcid": "4420", 00:26:22.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:22.444 "hdgst": false, 00:26:22.444 "ddgst": false 00:26:22.444 }, 00:26:22.444 "method": "bdev_nvme_attach_controller" 00:26:22.444 }' 00:26:22.444 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.444 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.444 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.444 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.444 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:22.444 10:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.444 10:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.444 10:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.444 10:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:22.444 10:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.444 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:22.444 ... 00:26:22.444 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:22.444 ... 
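The filename0/filename1 banner above pins down the job shape dif.sh assembled earlier in the trace (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, randread). A hedged reconstruction of that jobfile; the NvmeXn1 filenames assume SPDK's usual <controller>n<nsid> naming for the bdevs the attach calls create:

cat > jobfile.fio <<'EOF'
[global]
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

The results this run prints below are internally consistent with the 8 KiB read size: filename0's 1629.6 avg IOPS works out to 1629.6 * 8 = 13036.8 KiB/s, exactly the avg bandwidth fio reports.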
00:26:22.444 fio-3.35 00:26:22.444 Starting 4 threads 00:26:22.444 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.713 00:26:27.713 filename0: (groupid=0, jobs=1): err= 0: pid=1598085: Thu Jul 25 10:33:16 2024 00:26:27.713 read: IOPS=1629, BW=12.7MiB/s (13.3MB/s)(63.7MiB/5005msec) 00:26:27.713 slat (nsec): min=9718, max=59444, avg=23177.41, stdev=9050.69 00:26:27.713 clat (usec): min=1106, max=8734, avg=4831.27, stdev=432.69 00:26:27.713 lat (usec): min=1132, max=8758, avg=4854.44, stdev=432.10 00:26:27.713 clat percentiles (usec): 00:26:27.713 | 1.00th=[ 3752], 5.00th=[ 4424], 10.00th=[ 4621], 20.00th=[ 4686], 00:26:27.713 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4817], 00:26:27.713 | 70.00th=[ 4883], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5342], 00:26:27.713 | 99.00th=[ 6587], 99.50th=[ 7242], 99.90th=[ 8291], 99.95th=[ 8455], 00:26:27.713 | 99.99th=[ 8717] 00:26:27.714 bw ( KiB/s): min=12512, max=13216, per=24.90%, avg=13036.80, stdev=197.81, samples=10 00:26:27.714 iops : min= 1564, max= 1652, avg=1629.60, stdev=24.73, samples=10 00:26:27.714 lat (msec) : 2=0.16%, 4=1.53%, 10=98.31% 00:26:27.714 cpu : usr=95.20%, sys=4.10%, ctx=17, majf=0, minf=0 00:26:27.714 IO depths : 1=0.3%, 2=16.3%, 4=55.1%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:27.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.714 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.714 issued rwts: total=8156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.714 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:27.714 filename0: (groupid=0, jobs=1): err= 0: pid=1598086: Thu Jul 25 10:33:16 2024 00:26:27.714 read: IOPS=1621, BW=12.7MiB/s (13.3MB/s)(63.4MiB/5002msec) 00:26:27.714 slat (nsec): min=7736, max=68441, avg=23353.92, stdev=12894.76 00:26:27.714 clat (usec): min=1003, max=9103, avg=4845.38, stdev=532.34 00:26:27.714 lat (usec): min=1014, max=9136, avg=4868.74, stdev=531.58 00:26:27.714 clat percentiles (usec): 00:26:27.714 | 1.00th=[ 3458], 5.00th=[ 4555], 10.00th=[ 4621], 20.00th=[ 4686], 00:26:27.714 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4817], 00:26:27.714 | 70.00th=[ 4883], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5604], 00:26:27.714 | 99.00th=[ 7439], 99.50th=[ 7832], 99.90th=[ 8717], 99.95th=[ 8848], 00:26:27.714 | 99.99th=[ 9110] 00:26:27.714 bw ( KiB/s): min=12080, max=13184, per=24.73%, avg=12949.33, stdev=343.91, samples=9 00:26:27.714 iops : min= 1510, max= 1648, avg=1618.67, stdev=42.99, samples=9 00:26:27.714 lat (msec) : 2=0.33%, 4=1.78%, 10=97.89% 00:26:27.714 cpu : usr=95.42%, sys=3.94%, ctx=15, majf=0, minf=0 00:26:27.714 IO depths : 1=0.1%, 2=17.3%, 4=54.9%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:27.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.714 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.714 issued rwts: total=8109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.714 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:27.714 filename1: (groupid=0, jobs=1): err= 0: pid=1598087: Thu Jul 25 10:33:16 2024 00:26:27.714 read: IOPS=1637, BW=12.8MiB/s (13.4MB/s)(64.0MiB/5001msec) 00:26:27.714 slat (nsec): min=7696, max=68340, avg=23462.66, stdev=12918.04 00:26:27.714 clat (usec): min=1161, max=9236, avg=4791.76, stdev=498.69 00:26:27.714 lat (usec): min=1172, max=9260, avg=4815.23, stdev=499.05 00:26:27.714 clat percentiles (usec): 00:26:27.714 | 1.00th=[ 2900], 5.00th=[ 
4359], 10.00th=[ 4621], 20.00th=[ 4686], 00:26:27.714 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4817], 00:26:27.714 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5211], 00:26:27.714 | 99.00th=[ 6849], 99.50th=[ 7635], 99.90th=[ 8586], 99.95th=[ 8979], 00:26:27.714 | 99.99th=[ 9241] 00:26:27.714 bw ( KiB/s): min=12720, max=13440, per=25.00%, avg=13091.56, stdev=207.97, samples=9 00:26:27.714 iops : min= 1590, max= 1680, avg=1636.44, stdev=26.00, samples=9 00:26:27.714 lat (msec) : 2=0.27%, 4=2.34%, 10=97.39% 00:26:27.714 cpu : usr=94.50%, sys=4.80%, ctx=16, majf=0, minf=9 00:26:27.714 IO depths : 1=0.1%, 2=21.9%, 4=52.2%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:27.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.714 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.714 issued rwts: total=8188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.714 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:27.714 filename1: (groupid=0, jobs=1): err= 0: pid=1598088: Thu Jul 25 10:33:16 2024 00:26:27.714 read: IOPS=1660, BW=13.0MiB/s (13.6MB/s)(64.9MiB/5004msec) 00:26:27.714 slat (nsec): min=7613, max=68376, avg=15602.92, stdev=10547.33 00:26:27.714 clat (usec): min=1716, max=7926, avg=4773.76, stdev=367.65 00:26:27.714 lat (usec): min=1738, max=7959, avg=4789.36, stdev=368.42 00:26:27.714 clat percentiles (usec): 00:26:27.714 | 1.00th=[ 3326], 5.00th=[ 4080], 10.00th=[ 4490], 20.00th=[ 4752], 00:26:27.714 | 30.00th=[ 4817], 40.00th=[ 4817], 50.00th=[ 4817], 60.00th=[ 4817], 00:26:27.714 | 70.00th=[ 4883], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5080], 00:26:27.714 | 99.00th=[ 5604], 99.50th=[ 6063], 99.90th=[ 7570], 99.95th=[ 7898], 00:26:27.714 | 99.99th=[ 7898] 00:26:27.714 bw ( KiB/s): min=13050, max=14448, per=25.36%, avg=13281.00, stdev=417.96, samples=10 00:26:27.714 iops : min= 1631, max= 1806, avg=1660.10, stdev=52.26, samples=10 00:26:27.714 lat (msec) : 2=0.10%, 4=4.20%, 10=95.70% 00:26:27.714 cpu : usr=95.72%, sys=3.82%, ctx=9, majf=0, minf=0 00:26:27.714 IO depths : 1=0.1%, 2=6.4%, 4=67.2%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:27.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.714 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.714 issued rwts: total=8307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.714 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:27.714 00:26:27.714 Run status group 0 (all jobs): 00:26:27.714 READ: bw=51.1MiB/s (53.6MB/s), 12.7MiB/s-13.0MiB/s (13.3MB/s-13.6MB/s), io=256MiB (268MB), run=5001-5005msec 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.714 00:26:27.714 real 0m24.161s 00:26:27.714 user 4m34.892s 00:26:27.714 sys 0m6.056s 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 ************************************ 00:26:27.714 END TEST fio_dif_rand_params 00:26:27.714 ************************************ 00:26:27.714 10:33:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:27.714 10:33:17 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:27.714 10:33:17 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 ************************************ 00:26:27.714 START TEST fio_dif_digest 00:26:27.714 ************************************ 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 bdev_null0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:27.714 [2024-07-25 10:33:17.245575] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.714 { 00:26:27.714 "params": { 00:26:27.714 "name": "Nvme$subsystem", 00:26:27.714 
"trtype": "$TEST_TRANSPORT", 00:26:27.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.714 "adrfam": "ipv4", 00:26:27.714 "trsvcid": "$NVMF_PORT", 00:26:27.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.714 "hdgst": ${hdgst:-false}, 00:26:27.714 "ddgst": ${ddgst:-false} 00:26:27.714 }, 00:26:27.714 "method": "bdev_nvme_attach_controller" 00:26:27.714 } 00:26:27.714 EOF 00:26:27.714 )") 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:27.714 "params": { 00:26:27.714 "name": "Nvme0", 00:26:27.714 "trtype": "tcp", 00:26:27.714 "traddr": "10.0.0.2", 00:26:27.714 "adrfam": "ipv4", 00:26:27.714 "trsvcid": "4420", 00:26:27.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:27.714 "hdgst": true, 00:26:27.714 "ddgst": true 00:26:27.714 }, 00:26:27.714 "method": "bdev_nvme_attach_controller" 00:26:27.714 }' 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:27.714 10:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.971 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:27.971 ... 
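Per the parameters set in the trace above, the digest job is deliberately heavier per IO than the rand_params runs: 128 KiB reads, three jobs, iodepth 3, ten seconds, against a DIF type 3 null bdev. The throughput it reports further down checks out against that block size:

# 183.4 avg IOPS at 128 KiB per read:
echo $(( 1834 * 128 / 10 )) KiB/s   # 23475 KiB/s, matching fio's avg=23475.20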
00:26:27.971 fio-3.35 00:26:27.971 Starting 3 threads 00:26:27.971 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.162 00:26:40.162 filename0: (groupid=0, jobs=1): err= 0: pid=1598669: Thu Jul 25 10:33:28 2024 00:26:40.162 read: IOPS=182, BW=22.9MiB/s (24.0MB/s)(230MiB/10048msec) 00:26:40.162 slat (nsec): min=7878, max=64929, avg=21747.98, stdev=8342.89 00:26:40.162 clat (usec): min=12032, max=54409, avg=16359.33, stdev=1591.67 00:26:40.162 lat (usec): min=12057, max=54437, avg=16381.08, stdev=1590.58 00:26:40.162 clat percentiles (usec): 00:26:40.162 | 1.00th=[13829], 5.00th=[14484], 10.00th=[15008], 20.00th=[15401], 00:26:40.162 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:26:40.162 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[17957], 00:26:40.162 | 99.00th=[19006], 99.50th=[19268], 99.90th=[49546], 99.95th=[54264], 00:26:40.162 | 99.99th=[54264] 00:26:40.162 bw ( KiB/s): min=22528, max=24320, per=35.08%, avg=23475.20, stdev=477.85, samples=20 00:26:40.162 iops : min= 176, max= 190, avg=183.40, stdev= 3.73, samples=20 00:26:40.162 lat (msec) : 20=99.89%, 50=0.05%, 100=0.05% 00:26:40.162 cpu : usr=93.61%, sys=5.81%, ctx=66, majf=0, minf=139 00:26:40.162 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.162 issued rwts: total=1837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.162 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:40.162 filename0: (groupid=0, jobs=1): err= 0: pid=1598670: Thu Jul 25 10:33:28 2024 00:26:40.162 read: IOPS=166, BW=20.8MiB/s (21.8MB/s)(209MiB/10047msec) 00:26:40.162 slat (nsec): min=7958, max=43809, avg=14115.05, stdev=3048.86 00:26:40.162 clat (usec): min=14021, max=54577, avg=17993.04, stdev=1799.39 00:26:40.162 lat (usec): min=14038, max=54589, avg=18007.16, stdev=1799.71 00:26:40.162 clat percentiles (usec): 00:26:40.162 | 1.00th=[14877], 5.00th=[15795], 10.00th=[16319], 20.00th=[16909], 00:26:40.162 | 30.00th=[17171], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:26:40.162 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19792], 95.00th=[20317], 00:26:40.162 | 99.00th=[21103], 99.50th=[21365], 99.90th=[51119], 99.95th=[54789], 00:26:40.162 | 99.99th=[54789] 00:26:40.162 bw ( KiB/s): min=20480, max=22784, per=31.91%, avg=21352.50, stdev=793.83, samples=20 00:26:40.162 iops : min= 160, max= 178, avg=166.80, stdev= 6.20, samples=20 00:26:40.162 lat (msec) : 20=93.54%, 50=6.34%, 100=0.12% 00:26:40.162 cpu : usr=93.92%, sys=5.67%, ctx=22, majf=0, minf=105 00:26:40.162 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.162 issued rwts: total=1671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.162 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:40.162 filename0: (groupid=0, jobs=1): err= 0: pid=1598671: Thu Jul 25 10:33:28 2024 00:26:40.162 read: IOPS=173, BW=21.7MiB/s (22.8MB/s)(218MiB/10048msec) 00:26:40.162 slat (nsec): min=7943, max=64077, avg=15982.41, stdev=5522.96 00:26:40.162 clat (usec): min=12892, max=58470, avg=17227.64, stdev=1751.40 00:26:40.162 lat (usec): min=12904, max=58489, avg=17243.62, stdev=1751.15 00:26:40.162 clat percentiles (usec): 00:26:40.162 | 
1.00th=[14484], 5.00th=[15270], 10.00th=[15664], 20.00th=[16188], 00:26:40.162 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:26:40.162 | 70.00th=[17695], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:26:40.162 | 99.00th=[20055], 99.50th=[20579], 99.90th=[51643], 99.95th=[58459], 00:26:40.162 | 99.99th=[58459] 00:26:40.162 bw ( KiB/s): min=20736, max=23040, per=33.34%, avg=22310.40, stdev=588.92, samples=20 00:26:40.162 iops : min= 162, max= 180, avg=174.30, stdev= 4.60, samples=20 00:26:40.162 lat (msec) : 20=98.68%, 50=1.20%, 100=0.11% 00:26:40.162 cpu : usr=94.02%, sys=5.54%, ctx=25, majf=0, minf=164 00:26:40.162 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.162 issued rwts: total=1745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.162 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:40.162 00:26:40.162 Run status group 0 (all jobs): 00:26:40.162 READ: bw=65.3MiB/s (68.5MB/s), 20.8MiB/s-22.9MiB/s (21.8MB/s-24.0MB/s), io=657MiB (689MB), run=10047-10048msec 00:26:40.162 10:33:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:40.162 10:33:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:40.162 10:33:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:40.162 10:33:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.163 00:26:40.163 real 0m11.203s 00:26:40.163 user 0m29.263s 00:26:40.163 sys 0m1.972s 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:40.163 10:33:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.163 ************************************ 00:26:40.163 END TEST fio_dif_digest 00:26:40.163 ************************************ 00:26:40.163 10:33:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:40.163 10:33:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:40.163 rmmod nvme_tcp 00:26:40.163 rmmod nvme_fabrics 00:26:40.163 
rmmod nvme_keyring 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1593334 ']' 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1593334 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1593334 ']' 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1593334 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1593334 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1593334' 00:26:40.163 killing process with pid 1593334 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1593334 00:26:40.163 10:33:28 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1593334 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:40.163 10:33:28 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:40.163 Waiting for block devices as requested 00:26:40.163 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:26:40.163 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:40.163 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:40.422 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:40.422 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:40.422 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:40.422 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:40.682 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:26:40.682 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:26:40.682 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:40.682 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:40.942 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:40.942 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:40.942 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:41.201 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:41.201 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:26:41.201 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:26:41.201 10:33:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:41.201 10:33:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:41.201 10:33:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.201 10:33:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:41.201 10:33:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.201 10:33:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:41.201 10:33:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.735 10:33:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:43.735 00:26:43.735 real 1m5.870s 00:26:43.735 user 6m30.585s 00:26:43.735 sys 0m16.878s 00:26:43.735 10:33:32 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:43.735 10:33:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
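Between the two suites the trace above tears the fabric down completely; condensed, with the pid from this run:

# Unload host-side NVMe/TCP modules, stop the target, reset PCI bindings.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 1593334     # reactor_0, the nvmf target process the suite started
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
ip -4 addr flush cvl_0_1     # drop the initiator-side test address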
00:26:43.735 ************************************ 00:26:43.735 END TEST nvmf_dif 00:26:43.735 ************************************ 00:26:43.735 10:33:33 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:43.735 10:33:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:43.735 10:33:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:43.735 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:26:43.735 ************************************ 00:26:43.735 START TEST nvmf_abort_qd_sizes 00:26:43.735 ************************************ 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:43.735 * Looking for test storage... 00:26:43.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.735 10:33:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:43.735 10:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:45.112 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:45.112 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.112 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:45.113 Found net devices under 0000:08:00.0: cvl_0_0 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:45.113 Found net devices under 0000:08:00.1: cvl_0_1 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
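Note: the @382-@401 steps above resolve each matched NIC PCI address to the net interface the kernel created for it, by globbing sysfs. A condensed sketch of that lookup, reusing the array names from the trace (the cvl_0_0/cvl_0_1 interface names come from the ice driver on this rig):

    # For every supported NIC, collect the netdev name(s) under its sysfs node.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")
    done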
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:45.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:45.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms
00:26:45.113 
00:26:45.113 --- 10.0.0.2 ping statistics ---
00:26:45.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:45.113 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:45.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:45.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms
00:26:45.113 
00:26:45.113 --- 10.0.0.1 ping statistics ---
00:26:45.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:45.113 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']'
00:26:45.113 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:26:46.049 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:26:46.049 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:26:46.049 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:26:46.049 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:26:46.049 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:26:46.049 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:26:46.049 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:26:46.049 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:26:46.049 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:26:46.049 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:26:46.049 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:26:46.049 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:26:46.049 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:26:46.049 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci
00:26:46.049 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci
00:26:46.049 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci
00:26:46.986 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:46.986 10:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:26:47.244 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1602402
00:26:47.244 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:26:47.244 10:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1602402
00:26:47.244 10:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1602402 ']'
00:26:47.244 10:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:47.244 10:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:47.244 10:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
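Note: nvmf_tcp_init (nvmf/common.sh@418 onward, traced above) builds the whole test topology on one host: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP/4420 is opened in iptables, and reachability is ping-checked before nvmf_tgt is launched inside the namespace. Because the two E810 ports on this phy rig are the two ends of the link, the namespace split lets a single machine act as both initiator and target. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xf   # target runs inside the ns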
00:26:47.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.244 10:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:47.244 10:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:47.244 [2024-07-25 10:33:36.814285] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:26:47.245 [2024-07-25 10:33:36.814386] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.245 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.245 [2024-07-25 10:33:36.882845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:47.245 [2024-07-25 10:33:37.001357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.245 [2024-07-25 10:33:37.001415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.245 [2024-07-25 10:33:37.001432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.245 [2024-07-25 10:33:37.001446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.245 [2024-07-25 10:33:37.001458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.245 [2024-07-25 10:33:37.001519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.245 [2024-07-25 10:33:37.001576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.245 [2024-07-25 10:33:37.001628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:47.245 [2024-07-25 10:33:37.001634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 00:26:47.503 10:33:37 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:47.503 10:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:47.503 ************************************ 00:26:47.503 START TEST spdk_target_abort 00:26:47.503 ************************************ 00:26:47.503 10:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:26:47.503 10:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:47.503 10:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:26:47.503 10:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.503 10:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:50.783 spdk_targetn1 00:26:50.783 10:33:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.783 10:33:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:50.783 10:33:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.783 10:33:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:50.783 [2024-07-25 10:33:39.994303] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.783 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.783 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:50.783 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:50.784 [2024-07-25 10:33:40.026933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:50.784 10:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:50.784 EAL: No free 2048 kB hugepages 
reported on node 1
00:26:54.101 Initializing NVMe Controllers
00:26:54.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:26:54.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:26:54.101 Initialization complete. Launching workers.
00:26:54.101 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10851, failed: 0
00:26:54.101 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1179, failed to submit 9672
00:26:54.101 success 729, unsuccess 450, failed 0
00:26:54.101 10:33:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:54.101 10:33:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:26:54.101 EAL: No free 2048 kB hugepages reported on node 1
00:26:57.390 Initializing NVMe Controllers
00:26:57.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:26:57.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:26:57.390 Initialization complete. Launching workers.
00:26:57.390 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8718, failed: 0
00:26:57.390 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1207, failed to submit 7511
00:26:57.390 success 334, unsuccess 873, failed 0
00:26:57.390 10:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:57.390 10:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:26:57.390 EAL: No free 2048 kB hugepages reported on node 1
00:27:00.671 Initializing NVMe Controllers
00:27:00.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:00.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:00.671 Initialization complete. Launching workers.
00:27:00.671 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28824, failed: 0
00:27:00.671 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2576, failed to submit 26248
00:27:00.671 success 432, unsuccess 2144, failed 0
00:27:00.671 10:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:27:00.671 10:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:00.671 10:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:27:00.671 10:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:00.671 10:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:27:00.671 10:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:00.671 10:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1602402
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1602402 ']'
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1602402
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1602402
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1602402'
00:27:01.606 killing process with pid 1602402
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1602402
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1602402
00:27:01.606 
00:27:01.606 real 0m14.150s
00:27:01.606 user 0m53.816s
00:27:01.606 sys 0m2.260s
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:01.606 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:27:01.606 ************************************
00:27:01.606 END TEST spdk_target_abort
00:27:01.606 ************************************
00:27:01.606 10:33:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:27:01.606 10:33:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:27:01.606 10:33:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:01.606 10:33:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:27:01.606 ************************************
00:27:01.606 START TEST kernel_target_abort
************************************ 00:27:01.606 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:27:01.606 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:01.606 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]]
00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet
00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:27:01.867 10:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:02.805 Waiting for block devices as requested
00:27:02.805 0000:84:00.0 (8086 0a54): vfio-pci -> nvme
00:27:02.805 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma
00:27:02.805 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma
00:27:03.063 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma
00:27:03.063 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma
00:27:03.063 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma
00:27:03.063 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma
00:27:03.323 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma
00:27:03.323 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma
00:27:03.323 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma
00:27:03.583 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma
00:27:03.583 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma
00:27:03.583 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma
00:27:03.583 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma
00:27:03.842 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma
00:27:03.842 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma
00:27:03.842 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:27:04.101 No valid GPT data, bailing
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt=
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420
00:27:04.101 
00:27:04.101 Discovery Log Number of Records 2, Generation counter 2
00:27:04.101 =====Discovery Log Entry 0======
00:27:04.101 trtype: tcp
00:27:04.101 adrfam: ipv4
00:27:04.101 subtype: current discovery subsystem
00:27:04.101 treq: not specified, sq flow control disable supported
00:27:04.101 portid: 1
00:27:04.101 trsvcid: 4420
00:27:04.101 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:04.101 traddr: 10.0.0.1
00:27:04.101 eflags: none
00:27:04.101 sectype: none
00:27:04.101 =====Discovery Log Entry 1======
00:27:04.101 trtype: tcp
00:27:04.101 adrfam: ipv4
00:27:04.101 subtype: nvme subsystem
00:27:04.101 treq: not specified, sq flow control disable supported
00:27:04.101 portid: 1
00:27:04.101 trsvcid: 4420
00:27:04.101 subnqn: nqn.2016-06.io.spdk:testnqn
00:27:04.101 traddr: 10.0.0.1
00:27:04.101 eflags: none
00:27:04.101 sectype: none
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:04.101 10:33:53
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:04.101 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.102 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:04.102 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.102 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:04.102 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.102 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:04.102 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.102 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:04.102 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:04.102 10:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:04.102 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.383 Initializing NVMe Controllers 00:27:07.383 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:07.383 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:07.383 Initialization complete. Launching workers. 00:27:07.383 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38892, failed: 0 00:27:07.383 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38892, failed to submit 0 00:27:07.383 success 0, unsuccess 38892, failed 0 00:27:07.383 10:33:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:07.383 10:33:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:07.383 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.661 Initializing NVMe Controllers 00:27:10.661 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:10.661 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:10.661 Initialization complete. Launching workers. 
00:27:10.661 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70136, failed: 0
00:27:10.661 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17702, failed to submit 52434
00:27:10.661 success 0, unsuccess 17702, failed 0
00:27:10.661 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:10.661 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:10.661 EAL: No free 2048 kB hugepages reported on node 1
00:27:13.938 Initializing NVMe Controllers
00:27:13.938 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:27:13.938 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:13.938 Initialization complete. Launching workers.
00:27:13.938 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69219, failed: 0
00:27:13.938 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17306, failed to submit 51913
00:27:13.938 success 0, unsuccess 17306, failed 0
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:27:13.938 10:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:27:14.504 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:27:14.504 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:27:14.504 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:27:14.504 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:27:14.504 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:27:14.504 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:27:14.504 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:27:14.504 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:27:14.504 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:27:14.504 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:27:14.504 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:27:14.504 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:27:14.763 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:27:14.763 0000:80:04.2 (8086 3c22): ioatdma ->
vfio-pci 00:27:14.763 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:27:14.763 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:27:15.704 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:27:15.704 00:27:15.704 real 0m13.883s 00:27:15.704 user 0m6.128s 00:27:15.704 sys 0m3.009s 00:27:15.704 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:15.704 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.704 ************************************ 00:27:15.704 END TEST kernel_target_abort 00:27:15.704 ************************************ 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:15.704 rmmod nvme_tcp 00:27:15.704 rmmod nvme_fabrics 00:27:15.704 rmmod nvme_keyring 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1602402 ']' 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1602402 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1602402 ']' 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1602402 00:27:15.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1602402) - No such process 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1602402 is not found' 00:27:15.704 Process with pid 1602402 is not found 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:15.704 10:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:16.643 Waiting for block devices as requested 00:27:16.643 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:27:16.903 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:16.903 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:16.903 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:16.903 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:17.164 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:17.164 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:17.164 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:27:17.164 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:17.425 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:17.425 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:17.425 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:17.425 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:17.685 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:17.685 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:17.685 0000:80:04.1 
(8086 3c21): vfio-pci -> ioatdma 00:27:17.946 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:17.946 10:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.946 10:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.946 10:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.946 10:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.946 10:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.946 10:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:17.946 10:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.852 10:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.852 00:27:19.852 real 0m36.532s 00:27:19.852 user 1m1.764s 00:27:19.852 sys 0m8.164s 00:27:19.852 10:34:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.852 10:34:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:19.852 ************************************ 00:27:19.852 END TEST nvmf_abort_qd_sizes 00:27:19.852 ************************************ 00:27:19.852 10:34:09 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:19.852 10:34:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:19.852 10:34:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.852 10:34:09 -- common/autotest_common.sh@10 -- # set +x 00:27:19.852 ************************************ 00:27:19.852 START TEST keyring_file 00:27:19.852 ************************************ 00:27:19.852 10:34:09 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:20.111 * Looking for test storage... 
00:27:20.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:20.111 10:34:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:20.111 10:34:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.111 10:34:09 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.111 10:34:09 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.111 10:34:09 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.111 10:34:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.111 10:34:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.111 10:34:09 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.111 10:34:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:20.111 10:34:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.111 10:34:09 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.111 10:34:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:20.111 10:34:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:20.111 10:34:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:20.111 10:34:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:20.111 10:34:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:20.111 10:34:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:20.111 10:34:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:20.111 10:34:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:20.111 10:34:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:20.111 10:34:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:20.111 10:34:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:20.111 10:34:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.g0yrXYMC6l 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:20.112 10:34:09 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.g0yrXYMC6l 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.g0yrXYMC6l 00:27:20.112 10:34:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.g0yrXYMC6l 00:27:20.112 10:34:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.o3SpixC4Sl 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:20.112 10:34:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.o3SpixC4Sl 00:27:20.112 10:34:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.o3SpixC4Sl 00:27:20.112 10:34:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.o3SpixC4Sl 00:27:20.112 10:34:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=1606908 00:27:20.112 10:34:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:20.112 10:34:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1606908 00:27:20.112 10:34:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1606908 ']' 00:27:20.112 10:34:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.112 10:34:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.112 10:34:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.112 10:34:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.112 10:34:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:20.112 [2024-07-25 10:34:09.845734] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 
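Note: the prep_key trace above (keyring/common.sh@15-@23) shows how each test key becomes a TLS PSK file: mktemp allocates the path, format_interchange_psk wraps the hex key material in the NVMe-oF interchange form (the NVMeTLSkey-1 prefix; the actual encoding is produced by the python one-liner at nvmf/common.sh@705 and is not reproduced here), and chmod 0600 applies the owner-only permissions the keyring code expects. A sketch of just the shell steps:

    path=$(mktemp)                              # e.g. /tmp/tmp.g0yrXYMC6l
    key=00112233445566778899aabbccddeeff        # hex key material, digest 0 (file.sh@15)
    format_interchange_psk "$key" 0 > "$path"   # emits NVMeTLSkey-1:... via the python helper
    chmod 0600 "$path"                          # required before handing the file to the target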
00:27:20.112 [2024-07-25 10:34:09.845830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606908 ] 00:27:20.112 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.371 [2024-07-25 10:34:09.907328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.371 [2024-07-25 10:34:10.025178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:20.629 10:34:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:20.629 [2024-07-25 10:34:10.250270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.629 null0 00:27:20.629 [2024-07-25 10:34:10.282338] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:20.629 [2024-07-25 10:34:10.282739] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:20.629 [2024-07-25 10:34:10.290340] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.629 10:34:10 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:20.629 [2024-07-25 10:34:10.302366] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:20.629 request: 00:27:20.629 { 00:27:20.629 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:20.629 "secure_channel": false, 00:27:20.629 "listen_address": { 00:27:20.629 "trtype": "tcp", 00:27:20.629 "traddr": "127.0.0.1", 00:27:20.629 "trsvcid": "4420" 00:27:20.629 }, 00:27:20.629 "method": "nvmf_subsystem_add_listener", 00:27:20.629 "req_id": 1 00:27:20.629 } 00:27:20.629 Got JSON-RPC error response 00:27:20.629 response: 00:27:20.629 { 00:27:20.629 "code": -32602, 00:27:20.629 "message": "Invalid parameters" 00:27:20.629 } 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 
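[Annotation, not captured log output.] The xtrace above is the suite's negative-test idiom: NOT wraps rpc_cmd through valid_exec_arg, the duplicate nvmf_subsystem_add_listener for 127.0.0.1:4420 is expected to fail (the target logs "Listener already exists" and returns JSON-RPC -32602 "Invalid parameters"), and es=1 records that expected failure. A simplified sketch of the contract, assumed rather than quoted from autotest_common.sh:

    # NOT <cmd ...>: invert the wrapped command's status. The test step
    # passes only when <cmd> exits non-zero, as with the duplicate listener.
    NOT() {
        local es=0
        "$@" || es=$?
        ((es != 0))   # non-zero exit from <cmd> becomes success here
    }

    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0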
00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:20.629 10:34:10 keyring_file -- keyring/file.sh@46 -- # bperfpid=1606918 00:27:20.629 10:34:10 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:20.629 10:34:10 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1606918 /var/tmp/bperf.sock 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1606918 ']' 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:20.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.629 10:34:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:20.629 [2024-07-25 10:34:10.354749] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:27:20.630 [2024-07-25 10:34:10.354846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606918 ] 00:27:20.630 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.888 [2024-07-25 10:34:10.416403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.888 [2024-07-25 10:34:10.536177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.888 10:34:10 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.888 10:34:10 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:20.888 10:34:10 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g0yrXYMC6l 00:27:20.888 10:34:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g0yrXYMC6l 00:27:21.455 10:34:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.o3SpixC4Sl 00:27:21.455 10:34:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.o3SpixC4Sl 00:27:21.712 10:34:11 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:21.712 10:34:11 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:21.712 10:34:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:21.712 10:34:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:21.712 10:34:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:21.971 10:34:11 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.g0yrXYMC6l == \/\t\m\p\/\t\m\p\.\g\0\y\r\X\Y\M\C\6\l ]] 00:27:21.971 10:34:11 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:21.971 10:34:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:21.971 10:34:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:21.971 10:34:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:21.971 10:34:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:22.229 10:34:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.o3SpixC4Sl == \/\t\m\p\/\t\m\p\.\o\3\S\p\i\x\C\4\S\l ]] 00:27:22.229 10:34:11 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:22.229 10:34:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:22.229 10:34:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:22.229 10:34:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:22.229 10:34:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:22.229 10:34:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:22.487 10:34:12 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:22.487 10:34:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:22.487 10:34:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:22.487 10:34:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:22.487 10:34:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:22.487 10:34:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:22.487 10:34:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:22.745 10:34:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:22.745 10:34:12 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:22.745 10:34:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:23.003 [2024-07-25 10:34:12.528941] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:23.003 nvme0n1 00:27:23.003 10:34:12 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:23.003 10:34:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:23.003 10:34:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:23.003 10:34:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:23.003 10:34:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:23.003 10:34:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:23.261 10:34:12 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:23.261 10:34:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:23.261 10:34:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:23.261 10:34:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:23.261 10:34:12 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:23.261 10:34:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:23.261 10:34:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:23.520 10:34:13 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:23.520 10:34:13 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:23.520 Running I/O for 1 seconds... 00:27:24.457 00:27:24.457 Latency(us) 00:27:24.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.457 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:24.457 nvme0n1 : 1.01 7171.90 28.02 0.00 0.00 17754.06 4417.61 25437.68 00:27:24.457 =================================================================================================================== 00:27:24.457 Total : 7171.90 28.02 0.00 0.00 17754.06 4417.61 25437.68 00:27:24.457 0 00:27:24.715 10:34:14 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:24.715 10:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:24.974 10:34:14 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:24.974 10:34:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:24.974 10:34:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:24.974 10:34:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:24.974 10:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:24.974 10:34:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:25.232 10:34:14 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:25.232 10:34:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:25.232 10:34:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:25.232 10:34:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:25.232 10:34:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:25.232 10:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:25.232 10:34:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:25.490 10:34:15 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:25.490 10:34:15 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:25.490 10:34:15 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:25.490 10:34:15 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:25.490 10:34:15 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:25.490 10:34:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:25.490 10:34:15 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:25.490 10:34:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:25.490 10:34:15 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:25.490 10:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:25.748 [2024-07-25 10:34:15.376426] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:25.748 [2024-07-25 10:34:15.377067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154e520 (107): Transport endpoint is not connected 00:27:25.748 [2024-07-25 10:34:15.378058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154e520 (9): Bad file descriptor 00:27:25.748 [2024-07-25 10:34:15.379065] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:25.748 [2024-07-25 10:34:15.379086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:25.748 [2024-07-25 10:34:15.379102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:25.748 request: 00:27:25.748 { 00:27:25.748 "name": "nvme0", 00:27:25.748 "trtype": "tcp", 00:27:25.748 "traddr": "127.0.0.1", 00:27:25.748 "adrfam": "ipv4", 00:27:25.748 "trsvcid": "4420", 00:27:25.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:25.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:25.748 "prchk_reftag": false, 00:27:25.748 "prchk_guard": false, 00:27:25.748 "hdgst": false, 00:27:25.748 "ddgst": false, 00:27:25.748 "psk": "key1", 00:27:25.749 "method": "bdev_nvme_attach_controller", 00:27:25.749 "req_id": 1 00:27:25.749 } 00:27:25.749 Got JSON-RPC error response 00:27:25.749 response: 00:27:25.749 { 00:27:25.749 "code": -5, 00:27:25.749 "message": "Input/output error" 00:27:25.749 } 00:27:25.749 10:34:15 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:25.749 10:34:15 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:25.749 10:34:15 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:25.749 10:34:15 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:25.749 10:34:15 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:25.749 10:34:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:25.749 10:34:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:25.749 10:34:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:25.749 10:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:25.749 10:34:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:26.075 10:34:15 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:26.075 10:34:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:26.075 10:34:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:26.075 10:34:15 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:26.075 10:34:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:26.075 10:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.075 10:34:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:26.349 10:34:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:26.349 10:34:15 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:26.349 10:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:26.607 10:34:16 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:26.607 10:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:26.864 10:34:16 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:26.864 10:34:16 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:26.864 10:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:27.122 10:34:16 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:27.122 10:34:16 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.g0yrXYMC6l 00:27:27.122 10:34:16 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.g0yrXYMC6l 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.g0yrXYMC6l 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g0yrXYMC6l 00:27:27.122 10:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g0yrXYMC6l 00:27:27.122 [2024-07-25 10:34:16.867944] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.g0yrXYMC6l': 0100660 00:27:27.122 [2024-07-25 10:34:16.867999] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:27.122 request: 00:27:27.122 { 00:27:27.122 "name": "key0", 00:27:27.122 "path": "/tmp/tmp.g0yrXYMC6l", 00:27:27.122 "method": "keyring_file_add_key", 00:27:27.122 "req_id": 1 00:27:27.122 } 00:27:27.122 Got JSON-RPC error response 00:27:27.122 response: 00:27:27.122 { 00:27:27.122 "code": -1, 00:27:27.122 "message": "Operation not permitted" 00:27:27.122 } 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.122 10:34:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.122 10:34:16 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.122 10:34:16 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.g0yrXYMC6l 00:27:27.123 10:34:16 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g0yrXYMC6l 00:27:27.123 10:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g0yrXYMC6l 00:27:27.380 10:34:17 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.g0yrXYMC6l 00:27:27.380 10:34:17 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:27.380 10:34:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:27.380 10:34:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:27.380 10:34:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:27.381 10:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:27.381 10:34:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:27.639 10:34:17 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:27.639 10:34:17 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:27.639 10:34:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:27.639 10:34:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:27.639 10:34:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:27.639 10:34:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.639 10:34:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:27.639 10:34:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.639 10:34:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:27.639 10:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:27.897 [2024-07-25 10:34:17.609932] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.g0yrXYMC6l': No such file or directory 00:27:27.897 [2024-07-25 10:34:17.609975] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:27.897 [2024-07-25 10:34:17.610010] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:27.897 [2024-07-25 10:34:17.610024] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:27.897 [2024-07-25 10:34:17.610038] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:27.897 request: 00:27:27.897 { 00:27:27.897 "name": "nvme0", 00:27:27.897 "trtype": "tcp", 00:27:27.897 "traddr": "127.0.0.1", 00:27:27.897 "adrfam": "ipv4", 00:27:27.897 
"trsvcid": "4420", 00:27:27.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:27.897 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:27.897 "prchk_reftag": false, 00:27:27.897 "prchk_guard": false, 00:27:27.897 "hdgst": false, 00:27:27.897 "ddgst": false, 00:27:27.897 "psk": "key0", 00:27:27.897 "method": "bdev_nvme_attach_controller", 00:27:27.897 "req_id": 1 00:27:27.897 } 00:27:27.897 Got JSON-RPC error response 00:27:27.897 response: 00:27:27.897 { 00:27:27.897 "code": -19, 00:27:27.897 "message": "No such device" 00:27:27.897 } 00:27:27.897 10:34:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:27.897 10:34:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.897 10:34:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.897 10:34:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.897 10:34:17 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:27.897 10:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:28.154 10:34:17 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:28.154 10:34:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:28.154 10:34:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:28.154 10:34:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:28.154 10:34:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:28.154 10:34:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:28.154 10:34:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.d93zobe1Q1 00:27:28.154 10:34:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:28.154 10:34:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:28.154 10:34:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:28.154 10:34:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:28.154 10:34:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:28.154 10:34:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:28.154 10:34:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:28.154 10:34:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.d93zobe1Q1 00:27:28.411 10:34:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.d93zobe1Q1 00:27:28.411 10:34:17 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.d93zobe1Q1 00:27:28.411 10:34:17 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.d93zobe1Q1 00:27:28.411 10:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.d93zobe1Q1 00:27:28.411 10:34:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:28.411 10:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:28.975 nvme0n1 00:27:28.975 
10:34:18 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:28.975 10:34:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:28.975 10:34:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:28.975 10:34:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:28.975 10:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:28.975 10:34:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:29.233 10:34:18 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:29.233 10:34:18 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:29.233 10:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:29.490 10:34:19 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:29.490 10:34:19 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:29.490 10:34:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.490 10:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.490 10:34:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:29.748 10:34:19 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:29.748 10:34:19 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:29.748 10:34:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:29.748 10:34:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:29.748 10:34:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.748 10:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.749 10:34:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.006 10:34:19 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:30.006 10:34:19 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:30.006 10:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:30.263 10:34:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:30.263 10:34:19 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:30.263 10:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.522 10:34:20 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:30.522 10:34:20 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.d93zobe1Q1 00:27:30.522 10:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.d93zobe1Q1 00:27:31.088 10:34:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.o3SpixC4Sl 00:27:31.088 10:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.o3SpixC4Sl 00:27:31.088 10:34:20 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:31.088 10:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:31.345 nvme0n1 00:27:31.602 10:34:21 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:31.602 10:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:31.859 10:34:21 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:31.859 "subsystems": [ 00:27:31.859 { 00:27:31.859 "subsystem": "keyring", 00:27:31.859 "config": [ 00:27:31.859 { 00:27:31.859 "method": "keyring_file_add_key", 00:27:31.859 "params": { 00:27:31.859 "name": "key0", 00:27:31.859 "path": "/tmp/tmp.d93zobe1Q1" 00:27:31.859 } 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "method": "keyring_file_add_key", 00:27:31.859 "params": { 00:27:31.859 "name": "key1", 00:27:31.859 "path": "/tmp/tmp.o3SpixC4Sl" 00:27:31.859 } 00:27:31.859 } 00:27:31.859 ] 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "subsystem": "iobuf", 00:27:31.859 "config": [ 00:27:31.859 { 00:27:31.859 "method": "iobuf_set_options", 00:27:31.859 "params": { 00:27:31.859 "small_pool_count": 8192, 00:27:31.859 "large_pool_count": 1024, 00:27:31.859 "small_bufsize": 8192, 00:27:31.859 "large_bufsize": 135168 00:27:31.859 } 00:27:31.859 } 00:27:31.859 ] 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "subsystem": "sock", 00:27:31.859 "config": [ 00:27:31.859 { 00:27:31.859 "method": "sock_set_default_impl", 00:27:31.859 "params": { 00:27:31.859 "impl_name": "posix" 00:27:31.859 } 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "method": "sock_impl_set_options", 00:27:31.859 "params": { 00:27:31.859 "impl_name": "ssl", 00:27:31.859 "recv_buf_size": 4096, 00:27:31.859 "send_buf_size": 4096, 00:27:31.859 "enable_recv_pipe": true, 00:27:31.859 "enable_quickack": false, 00:27:31.859 "enable_placement_id": 0, 00:27:31.859 "enable_zerocopy_send_server": true, 00:27:31.859 "enable_zerocopy_send_client": false, 00:27:31.859 "zerocopy_threshold": 0, 00:27:31.859 "tls_version": 0, 00:27:31.859 "enable_ktls": false 00:27:31.859 } 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "method": "sock_impl_set_options", 00:27:31.859 "params": { 00:27:31.859 "impl_name": "posix", 00:27:31.859 "recv_buf_size": 2097152, 00:27:31.859 "send_buf_size": 2097152, 00:27:31.859 "enable_recv_pipe": true, 00:27:31.859 "enable_quickack": false, 00:27:31.859 "enable_placement_id": 0, 00:27:31.859 "enable_zerocopy_send_server": true, 00:27:31.859 "enable_zerocopy_send_client": false, 00:27:31.859 "zerocopy_threshold": 0, 00:27:31.859 "tls_version": 0, 00:27:31.859 "enable_ktls": false 00:27:31.859 } 00:27:31.859 } 00:27:31.859 ] 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "subsystem": "vmd", 00:27:31.859 "config": [] 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "subsystem": "accel", 00:27:31.859 "config": [ 00:27:31.859 { 00:27:31.859 "method": "accel_set_options", 00:27:31.859 "params": { 00:27:31.859 "small_cache_size": 128, 00:27:31.859 "large_cache_size": 16, 00:27:31.859 "task_count": 2048, 00:27:31.859 "sequence_count": 2048, 00:27:31.859 "buf_count": 2048 00:27:31.859 } 00:27:31.859 } 00:27:31.859 ] 00:27:31.859 
}, 00:27:31.859 { 00:27:31.859 "subsystem": "bdev", 00:27:31.859 "config": [ 00:27:31.859 { 00:27:31.859 "method": "bdev_set_options", 00:27:31.859 "params": { 00:27:31.859 "bdev_io_pool_size": 65535, 00:27:31.859 "bdev_io_cache_size": 256, 00:27:31.859 "bdev_auto_examine": true, 00:27:31.859 "iobuf_small_cache_size": 128, 00:27:31.859 "iobuf_large_cache_size": 16 00:27:31.859 } 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "method": "bdev_raid_set_options", 00:27:31.859 "params": { 00:27:31.859 "process_window_size_kb": 1024, 00:27:31.859 "process_max_bandwidth_mb_sec": 0 00:27:31.859 } 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "method": "bdev_iscsi_set_options", 00:27:31.859 "params": { 00:27:31.859 "timeout_sec": 30 00:27:31.859 } 00:27:31.859 }, 00:27:31.859 { 00:27:31.859 "method": "bdev_nvme_set_options", 00:27:31.859 "params": { 00:27:31.859 "action_on_timeout": "none", 00:27:31.859 "timeout_us": 0, 00:27:31.859 "timeout_admin_us": 0, 00:27:31.860 "keep_alive_timeout_ms": 10000, 00:27:31.860 "arbitration_burst": 0, 00:27:31.860 "low_priority_weight": 0, 00:27:31.860 "medium_priority_weight": 0, 00:27:31.860 "high_priority_weight": 0, 00:27:31.860 "nvme_adminq_poll_period_us": 10000, 00:27:31.860 "nvme_ioq_poll_period_us": 0, 00:27:31.860 "io_queue_requests": 512, 00:27:31.860 "delay_cmd_submit": true, 00:27:31.860 "transport_retry_count": 4, 00:27:31.860 "bdev_retry_count": 3, 00:27:31.860 "transport_ack_timeout": 0, 00:27:31.860 "ctrlr_loss_timeout_sec": 0, 00:27:31.860 "reconnect_delay_sec": 0, 00:27:31.860 "fast_io_fail_timeout_sec": 0, 00:27:31.860 "disable_auto_failback": false, 00:27:31.860 "generate_uuids": false, 00:27:31.860 "transport_tos": 0, 00:27:31.860 "nvme_error_stat": false, 00:27:31.860 "rdma_srq_size": 0, 00:27:31.860 "io_path_stat": false, 00:27:31.860 "allow_accel_sequence": false, 00:27:31.860 "rdma_max_cq_size": 0, 00:27:31.860 "rdma_cm_event_timeout_ms": 0, 00:27:31.860 "dhchap_digests": [ 00:27:31.860 "sha256", 00:27:31.860 "sha384", 00:27:31.860 "sha512" 00:27:31.860 ], 00:27:31.860 "dhchap_dhgroups": [ 00:27:31.860 "null", 00:27:31.860 "ffdhe2048", 00:27:31.860 "ffdhe3072", 00:27:31.860 "ffdhe4096", 00:27:31.860 "ffdhe6144", 00:27:31.860 "ffdhe8192" 00:27:31.860 ] 00:27:31.860 } 00:27:31.860 }, 00:27:31.860 { 00:27:31.860 "method": "bdev_nvme_attach_controller", 00:27:31.860 "params": { 00:27:31.860 "name": "nvme0", 00:27:31.860 "trtype": "TCP", 00:27:31.860 "adrfam": "IPv4", 00:27:31.860 "traddr": "127.0.0.1", 00:27:31.860 "trsvcid": "4420", 00:27:31.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.860 "prchk_reftag": false, 00:27:31.860 "prchk_guard": false, 00:27:31.860 "ctrlr_loss_timeout_sec": 0, 00:27:31.860 "reconnect_delay_sec": 0, 00:27:31.860 "fast_io_fail_timeout_sec": 0, 00:27:31.860 "psk": "key0", 00:27:31.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:31.860 "hdgst": false, 00:27:31.860 "ddgst": false 00:27:31.860 } 00:27:31.860 }, 00:27:31.860 { 00:27:31.860 "method": "bdev_nvme_set_hotplug", 00:27:31.860 "params": { 00:27:31.860 "period_us": 100000, 00:27:31.860 "enable": false 00:27:31.860 } 00:27:31.860 }, 00:27:31.860 { 00:27:31.860 "method": "bdev_wait_for_examine" 00:27:31.860 } 00:27:31.860 ] 00:27:31.860 }, 00:27:31.860 { 00:27:31.860 "subsystem": "nbd", 00:27:31.860 "config": [] 00:27:31.860 } 00:27:31.860 ] 00:27:31.860 }' 00:27:31.860 10:34:21 keyring_file -- keyring/file.sh@114 -- # killprocess 1606918 00:27:31.860 10:34:21 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1606918 ']' 00:27:31.860 10:34:21 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 1606918 00:27:31.860 10:34:21 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:31.860 10:34:21 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.860 10:34:21 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1606918 00:27:31.860 10:34:21 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:31.860 10:34:21 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:31.860 10:34:21 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1606918' 00:27:31.860 killing process with pid 1606918 00:27:31.860 10:34:21 keyring_file -- common/autotest_common.sh@969 -- # kill 1606918 00:27:31.860 Received shutdown signal, test time was about 1.000000 seconds 00:27:31.860 00:27:31.860 Latency(us) 00:27:31.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.860 =================================================================================================================== 00:27:31.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:31.860 10:34:21 keyring_file -- common/autotest_common.sh@974 -- # wait 1606918 00:27:32.118 10:34:21 keyring_file -- keyring/file.sh@117 -- # bperfpid=1608070 00:27:32.118 10:34:21 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1608070 /var/tmp/bperf.sock 00:27:32.118 10:34:21 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1608070 ']' 00:27:32.118 10:34:21 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:32.118 10:34:21 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:32.118 "subsystems": [ 00:27:32.118 { 00:27:32.118 "subsystem": "keyring", 00:27:32.118 "config": [ 00:27:32.118 { 00:27:32.118 "method": "keyring_file_add_key", 00:27:32.118 "params": { 00:27:32.118 "name": "key0", 00:27:32.118 "path": "/tmp/tmp.d93zobe1Q1" 00:27:32.118 } 00:27:32.118 }, 00:27:32.118 { 00:27:32.118 "method": "keyring_file_add_key", 00:27:32.118 "params": { 00:27:32.118 "name": "key1", 00:27:32.118 "path": "/tmp/tmp.o3SpixC4Sl" 00:27:32.118 } 00:27:32.118 } 00:27:32.118 ] 00:27:32.118 }, 00:27:32.118 { 00:27:32.118 "subsystem": "iobuf", 00:27:32.118 "config": [ 00:27:32.118 { 00:27:32.118 "method": "iobuf_set_options", 00:27:32.118 "params": { 00:27:32.118 "small_pool_count": 8192, 00:27:32.118 "large_pool_count": 1024, 00:27:32.118 "small_bufsize": 8192, 00:27:32.118 "large_bufsize": 135168 00:27:32.118 } 00:27:32.118 } 00:27:32.118 ] 00:27:32.118 }, 00:27:32.118 { 00:27:32.118 "subsystem": "sock", 00:27:32.118 "config": [ 00:27:32.118 { 00:27:32.118 "method": "sock_set_default_impl", 00:27:32.118 "params": { 00:27:32.118 "impl_name": "posix" 00:27:32.118 } 00:27:32.118 }, 00:27:32.118 { 00:27:32.118 "method": "sock_impl_set_options", 00:27:32.118 "params": { 00:27:32.118 "impl_name": "ssl", 00:27:32.118 "recv_buf_size": 4096, 00:27:32.118 "send_buf_size": 4096, 00:27:32.118 "enable_recv_pipe": true, 00:27:32.118 "enable_quickack": false, 00:27:32.118 "enable_placement_id": 0, 00:27:32.118 "enable_zerocopy_send_server": true, 00:27:32.118 "enable_zerocopy_send_client": false, 00:27:32.118 "zerocopy_threshold": 0, 00:27:32.118 "tls_version": 0, 00:27:32.119 "enable_ktls": false 00:27:32.119 } 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "method": "sock_impl_set_options", 00:27:32.119 "params": { 00:27:32.119 "impl_name": "posix", 00:27:32.119 "recv_buf_size": 2097152, 00:27:32.119 
"send_buf_size": 2097152, 00:27:32.119 "enable_recv_pipe": true, 00:27:32.119 "enable_quickack": false, 00:27:32.119 "enable_placement_id": 0, 00:27:32.119 "enable_zerocopy_send_server": true, 00:27:32.119 "enable_zerocopy_send_client": false, 00:27:32.119 "zerocopy_threshold": 0, 00:27:32.119 "tls_version": 0, 00:27:32.119 "enable_ktls": false 00:27:32.119 } 00:27:32.119 } 00:27:32.119 ] 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "subsystem": "vmd", 00:27:32.119 "config": [] 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "subsystem": "accel", 00:27:32.119 "config": [ 00:27:32.119 { 00:27:32.119 "method": "accel_set_options", 00:27:32.119 "params": { 00:27:32.119 "small_cache_size": 128, 00:27:32.119 "large_cache_size": 16, 00:27:32.119 "task_count": 2048, 00:27:32.119 "sequence_count": 2048, 00:27:32.119 "buf_count": 2048 00:27:32.119 } 00:27:32.119 } 00:27:32.119 ] 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "subsystem": "bdev", 00:27:32.119 "config": [ 00:27:32.119 { 00:27:32.119 "method": "bdev_set_options", 00:27:32.119 "params": { 00:27:32.119 "bdev_io_pool_size": 65535, 00:27:32.119 "bdev_io_cache_size": 256, 00:27:32.119 "bdev_auto_examine": true, 00:27:32.119 "iobuf_small_cache_size": 128, 00:27:32.119 "iobuf_large_cache_size": 16 00:27:32.119 } 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "method": "bdev_raid_set_options", 00:27:32.119 "params": { 00:27:32.119 "process_window_size_kb": 1024, 00:27:32.119 "process_max_bandwidth_mb_sec": 0 00:27:32.119 } 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "method": "bdev_iscsi_set_options", 00:27:32.119 "params": { 00:27:32.119 "timeout_sec": 30 00:27:32.119 } 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "method": "bdev_nvme_set_options", 00:27:32.119 "params": { 00:27:32.119 "action_on_timeout": "none", 00:27:32.119 "timeout_us": 0, 00:27:32.119 "timeout_admin_us": 0, 00:27:32.119 "keep_alive_timeout_ms": 10000, 00:27:32.119 "arbitration_burst": 0, 00:27:32.119 "low_priority_weight": 0, 00:27:32.119 "medium_priority_weight": 0, 00:27:32.119 "high_priority_weight": 0, 00:27:32.119 "nvme_adminq_poll_period_us": 10000, 00:27:32.119 "nvme_ioq_poll_period_us": 0, 00:27:32.119 "io_queue_requests": 512, 00:27:32.119 "delay_cmd_submit": true, 00:27:32.119 "transport_retry_count": 4, 00:27:32.119 "bdev_retry_count": 3, 00:27:32.119 "transport_ack_timeout": 0, 00:27:32.119 "ctrlr_loss_timeout_sec": 0, 00:27:32.119 "reconnect_delay_sec": 0, 00:27:32.119 "fast_io_fail_timeout_sec": 0, 00:27:32.119 "disable_auto_failback": false, 00:27:32.119 "generate_uuids": false, 00:27:32.119 "transport_tos": 0, 00:27:32.119 "nvme_error_stat": false, 00:27:32.119 "rdma_srq_size": 0, 00:27:32.119 "io_path_stat": false, 00:27:32.119 10:34:21 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:32.119 "allow_accel_sequence": false, 00:27:32.119 "rdma_max_cq_size": 0, 00:27:32.119 "rdma_cm_event_timeout_ms": 0, 00:27:32.119 "dhchap_digests": [ 00:27:32.119 "sha256", 00:27:32.119 "sha384", 00:27:32.119 "sha512" 00:27:32.119 ], 00:27:32.119 "dhchap_dhgroups": [ 00:27:32.119 "null", 00:27:32.119 "ffdhe2048", 00:27:32.119 "ffdhe3072", 00:27:32.119 "ffdhe4096", 00:27:32.119 "ffdhe6144", 00:27:32.119 "ffdhe8192" 00:27:32.119 ] 00:27:32.119 } 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "method": "bdev_nvme_attach_controller", 00:27:32.119 "params": { 00:27:32.119 "name": "nvme0", 00:27:32.119 "trtype": "TCP", 00:27:32.119 "adrfam": 
"IPv4", 00:27:32.119 "traddr": "127.0.0.1", 00:27:32.119 "trsvcid": "4420", 00:27:32.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.119 "prchk_reftag": false, 00:27:32.119 "prchk_guard": false, 00:27:32.119 "ctrlr_loss_timeout_sec": 0, 00:27:32.119 "reconnect_delay_sec": 0, 00:27:32.119 "fast_io_fail_timeout_sec": 0, 00:27:32.119 "psk": "key0", 00:27:32.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.119 "hdgst": false, 00:27:32.119 "ddgst": false 00:27:32.119 } 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "method": "bdev_nvme_set_hotplug", 00:27:32.119 "params": { 00:27:32.119 "period_us": 100000, 00:27:32.119 "enable": false 00:27:32.119 } 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "method": "bdev_wait_for_examine" 00:27:32.119 } 00:27:32.119 ] 00:27:32.119 }, 00:27:32.119 { 00:27:32.119 "subsystem": "nbd", 00:27:32.119 "config": [] 00:27:32.119 } 00:27:32.119 ] 00:27:32.119 }' 00:27:32.119 10:34:21 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:32.119 10:34:21 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:32.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:32.119 10:34:21 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:32.119 10:34:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:32.120 [2024-07-25 10:34:21.733206] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:27:32.120 [2024-07-25 10:34:21.733301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608070 ] 00:27:32.120 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.120 [2024-07-25 10:34:21.793557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.378 [2024-07-25 10:34:21.912255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.378 [2024-07-25 10:34:22.084507] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:33.312 10:34:22 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.312 10:34:22 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:33.312 10:34:22 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:33.312 10:34:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.312 10:34:22 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:33.312 10:34:23 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:33.312 10:34:23 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:33.312 10:34:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:33.312 10:34:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.312 10:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.312 10:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.312 10:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:33.570 10:34:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:33.570 10:34:23 keyring_file -- 
keyring/file.sh@122 -- # get_refcnt key1 00:27:33.570 10:34:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:33.570 10:34:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.570 10:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.570 10:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.570 10:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:33.827 10:34:23 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:33.827 10:34:23 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:33.827 10:34:23 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:33.827 10:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:34.086 10:34:23 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:34.086 10:34:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:34.086 10:34:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.d93zobe1Q1 /tmp/tmp.o3SpixC4Sl 00:27:34.086 10:34:23 keyring_file -- keyring/file.sh@20 -- # killprocess 1608070 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1608070 ']' 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1608070 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1608070 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1608070' 00:27:34.086 killing process with pid 1608070 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@969 -- # kill 1608070 00:27:34.086 Received shutdown signal, test time was about 1.000000 seconds 00:27:34.086 00:27:34.086 Latency(us) 00:27:34.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.086 =================================================================================================================== 00:27:34.086 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:34.086 10:34:23 keyring_file -- common/autotest_common.sh@974 -- # wait 1608070 00:27:34.346 10:34:24 keyring_file -- keyring/file.sh@21 -- # killprocess 1606908 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1606908 ']' 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1606908 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1606908 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1606908' 00:27:34.346 killing process with pid 1606908 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@969 -- # kill 1606908 00:27:34.346 [2024-07-25 10:34:24.024412] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:34.346 10:34:24 keyring_file -- common/autotest_common.sh@974 -- # wait 1606908 00:27:34.605 00:27:34.605 real 0m14.733s 00:27:34.605 user 0m37.395s 00:27:34.605 sys 0m3.123s 00:27:34.605 10:34:24 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:34.605 10:34:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:34.605 ************************************ 00:27:34.605 END TEST keyring_file 00:27:34.605 ************************************ 00:27:34.605 10:34:24 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:27:34.605 10:34:24 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:34.605 10:34:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:34.605 10:34:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:34.605 10:34:24 -- common/autotest_common.sh@10 -- # set +x 00:27:34.863 ************************************ 00:27:34.863 START TEST keyring_linux 00:27:34.863 ************************************ 00:27:34.863 10:34:24 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:34.863 * Looking for test storage... 00:27:34.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:34.863 10:34:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:34.863 10:34:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.863 10:34:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.863 10:34:24 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.863 10:34:24 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.863 10:34:24 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.864 10:34:24 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.864 10:34:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.864 10:34:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.864 10:34:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.864 10:34:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:34.864 10:34:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@14 -- # 
key1=112233445566778899aabbccddeeff00 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:34.864 /tmp/:spdk-test:key0 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:34.864 10:34:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:34.864 10:34:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:34.864 /tmp/:spdk-test:key1 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1608455 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:34.864 10:34:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1608455 00:27:34.864 10:34:24 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1608455 ']' 00:27:34.864 10:34:24 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.864 10:34:24 keyring_linux 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:27:34.864 10:34:24 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.864 10:34:24 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:34.864 10:34:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:34.864 [2024-07-25 10:34:24.623355] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:27:34.864 [2024-07-25 10:34:24.623458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608455 ] 00:27:35.123 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.123 [2024-07-25 10:34:24.683055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.123 [2024-07-25 10:34:24.799819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:35.382 10:34:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:35.382 [2024-07-25 10:34:25.042094] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.382 null0 00:27:35.382 [2024-07-25 10:34:25.074155] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:35.382 [2024-07-25 10:34:25.074588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.382 10:34:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:35.382 90887738 00:27:35.382 10:34:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:35.382 620902807 00:27:35.382 10:34:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1608534 00:27:35.382 10:34:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1608534 /var/tmp/bperf.sock 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1608534 ']' 00:27:35.382 10:34:25 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:35.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
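For context, the session-keyring handling traced above reduces to a handful of keyctl calls; a minimal sketch using this run's example PSK (test data from linux.sh, not a real secret):

    # Store the interchange PSK in the session keyring, as linux.sh@66 does;
    # keyctl prints the new key's serial number (90887738 in this run).
    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
    # Resolve the description back to its serial, then read the material out.
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"
    # cleanup() later unlinks the key from the session keyring the same way.
    keyctl unlink "$sn" @s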
00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.382 10:34:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:35.382 [2024-07-25 10:34:25.145787] Starting SPDK v24.09-pre git sha1 a4ac1b549 / DPDK 24.03.0 initialization... 00:27:35.382 [2024-07-25 10:34:25.145881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608534 ] 00:27:35.640 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.640 [2024-07-25 10:34:25.206316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.640 [2024-07-25 10:34:25.323281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.640 10:34:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.640 10:34:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:35.640 10:34:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:35.640 10:34:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:36.205 10:34:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:36.205 10:34:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:36.464 10:34:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:36.464 10:34:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:36.464 [2024-07-25 10:34:26.235868] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:36.722 nvme0n1 00:27:36.722 10:34:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:36.722 10:34:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:36.722 10:34:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:36.722 10:34:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:36.722 10:34:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:36.722 10:34:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.980 10:34:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:36.980 10:34:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:36.980 10:34:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:36.980 10:34:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:36.980 10:34:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.980 10:34:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.980 10:34:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:27:37.238 10:34:26 keyring_linux -- keyring/linux.sh@25 -- # sn=90887738 00:27:37.238 10:34:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:37.238 10:34:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:37.238 10:34:26 keyring_linux -- keyring/linux.sh@26 -- # [[ 90887738 == \9\0\8\8\7\7\3\8 ]] 00:27:37.238 10:34:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 90887738 00:27:37.238 10:34:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:37.238 10:34:26 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.238 Running I/O for 1 seconds... 00:27:38.172 00:27:38.172 Latency(us) 00:27:38.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.172 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:38.172 nvme0n1 : 1.01 6937.58 27.10 0.00 0.00 18334.59 10145.94 29127.11 00:27:38.172 =================================================================================================================== 00:27:38.172 Total : 6937.58 27.10 0.00 0.00 18334.59 10145.94 29127.11 00:27:38.172 0 00:27:38.430 10:34:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:38.431 10:34:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:38.688 10:34:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:38.688 10:34:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:38.688 10:34:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:38.688 10:34:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:38.689 10:34:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.689 10:34:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:38.946 10:34:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:38.946 10:34:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:38.946 10:34:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:38.946 10:34:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:38.946 10:34:28 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:27:38.946 10:34:28 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:38.946 10:34:28 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:38.946 10:34:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.946 10:34:28 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:38.946 10:34:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
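The check_keys helper traced here is essentially two RPC calls against the bperf socket plus a keyctl cross-check; a sketch of the equivalent one-liners ($SPDK_DIR is shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk):

    # Count the keys the bdevperf app currently holds (1 expected after attach).
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock keyring_get_keys | jq length
    # Pull the serial the app reports for :spdk-test:key0 ...
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'
    # ... and confirm it matches the kernel's view (90887738 in this run).
    keyctl search @s user :spdk-test:key0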
00:27:38.946 10:34:28 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:38.946 10:34:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:39.205 [2024-07-25 10:34:28.871986] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:39.205 [2024-07-25 10:34:28.872382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf0d60 (107): Transport endpoint is not connected 00:27:39.205 [2024-07-25 10:34:28.873375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf0d60 (9): Bad file descriptor 00:27:39.205 [2024-07-25 10:34:28.874373] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.205 [2024-07-25 10:34:28.874395] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:39.205 [2024-07-25 10:34:28.874418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:39.205 request: 00:27:39.205 { 00:27:39.205 "name": "nvme0", 00:27:39.205 "trtype": "tcp", 00:27:39.205 "traddr": "127.0.0.1", 00:27:39.205 "adrfam": "ipv4", 00:27:39.205 "trsvcid": "4420", 00:27:39.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.205 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:39.205 "prchk_reftag": false, 00:27:39.205 "prchk_guard": false, 00:27:39.205 "hdgst": false, 00:27:39.205 "ddgst": false, 00:27:39.205 "psk": ":spdk-test:key1", 00:27:39.205 "method": "bdev_nvme_attach_controller", 00:27:39.205 "req_id": 1 00:27:39.205 } 00:27:39.206 Got JSON-RPC error response 00:27:39.206 response: 00:27:39.206 { 00:27:39.206 "code": -5, 00:27:39.206 "message": "Input/output error" 00:27:39.206 } 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@33 -- # sn=90887738 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 90887738 00:27:39.206 1 links removed 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:39.206 
10:34:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@33 -- # sn=620902807 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 620902807 00:27:39.206 1 links removed 00:27:39.206 10:34:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1608534 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1608534 ']' 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1608534 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1608534 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1608534' 00:27:39.206 killing process with pid 1608534 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@969 -- # kill 1608534 00:27:39.206 Received shutdown signal, test time was about 1.000000 seconds 00:27:39.206 00:27:39.206 Latency(us) 00:27:39.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.206 =================================================================================================================== 00:27:39.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:39.206 10:34:28 keyring_linux -- common/autotest_common.sh@974 -- # wait 1608534 00:27:39.465 10:34:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1608455 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1608455 ']' 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1608455 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1608455 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1608455' 00:27:39.465 killing process with pid 1608455 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@969 -- # kill 1608455 00:27:39.465 10:34:29 keyring_linux -- common/autotest_common.sh@974 -- # wait 1608455 00:27:39.724 00:27:39.724 real 0m5.076s 00:27:39.724 user 0m9.994s 00:27:39.724 sys 0m1.583s 00:27:39.724 10:34:29 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:39.724 10:34:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:39.724 ************************************ 00:27:39.724 END TEST keyring_linux 00:27:39.724 ************************************ 00:27:39.724 10:34:29 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:39.724 10:34:29 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:39.724 10:34:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:39.724 10:34:29 -- 
spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:39.724 10:34:29 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:27:39.983 10:34:29 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:39.983 10:34:29 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:39.983 10:34:29 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:39.983 10:34:29 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:27:39.983 10:34:29 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:39.983 10:34:29 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:27:39.983 10:34:29 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:39.983 10:34:29 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:39.983 10:34:29 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:39.983 10:34:29 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:27:39.983 10:34:29 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:27:39.983 10:34:29 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:27:39.983 10:34:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:39.983 10:34:29 -- common/autotest_common.sh@10 -- # set +x 00:27:39.983 10:34:29 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:27:39.983 10:34:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:39.983 10:34:29 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:39.983 10:34:29 -- common/autotest_common.sh@10 -- # set +x 00:27:41.361 INFO: APP EXITING 00:27:41.361 INFO: killing all VMs 00:27:41.361 INFO: killing vhost app 00:27:41.361 WARN: no vhost pid file found 00:27:41.361 INFO: EXIT DONE 00:27:42.297 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:27:42.297 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:27:42.297 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:27:42.297 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:27:42.297 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:27:42.297 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:27:42.297 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:27:42.297 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:27:42.556 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:27:42.556 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:27:42.556 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:27:42.556 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:27:42.556 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:27:42.556 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:27:42.556 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:27:42.556 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:27:42.556 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:27:43.492 Cleaning 00:27:43.492 Removing: /var/run/dpdk/spdk0/config 00:27:43.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:43.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:43.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:43.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:43.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:43.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:43.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:43.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:43.492 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:43.492 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:43.492 Removing: /var/run/dpdk/spdk1/config 00:27:43.492 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:43.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:43.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:43.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:43.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:43.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:43.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:43.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:43.492 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:43.492 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:43.492 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:43.492 Removing: /var/run/dpdk/spdk2/config 00:27:43.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:43.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:43.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:43.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:43.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:43.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:43.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:43.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:43.492 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:43.492 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:43.492 Removing: /var/run/dpdk/spdk3/config 00:27:43.492 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:43.492 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:43.492 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:43.492 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:43.492 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:43.493 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:43.493 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:43.493 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:43.762 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:43.762 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:43.762 Removing: /var/run/dpdk/spdk4/config 00:27:43.762 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:43.762 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:43.762 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:43.762 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:43.762 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:43.762 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:43.762 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:43.762 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:43.762 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:43.762 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:43.762 Removing: /dev/shm/bdev_svc_trace.1 00:27:43.762 Removing: /dev/shm/nvmf_trace.0 00:27:43.762 Removing: /dev/shm/spdk_tgt_trace.pid1405496 00:27:43.762 Removing: /var/run/dpdk/spdk0 00:27:43.762 Removing: /var/run/dpdk/spdk1 00:27:43.762 Removing: /var/run/dpdk/spdk2 00:27:43.762 Removing: /var/run/dpdk/spdk3 00:27:43.762 Removing: /var/run/dpdk/spdk4 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1404276 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1404853 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1405496 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1405867 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1406400 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1406508 00:27:43.762 Removing: 
/var/run/dpdk/spdk_pid1407055 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1407072 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1407285 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1408318 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1409038 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1409280 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1409433 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1409605 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1409761 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1409898 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1410056 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1410249 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1410510 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1412537 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1412667 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1412806 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1412811 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1413199 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1413252 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1413599 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1413692 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1413826 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1413845 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1414046 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1414290 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1414890 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1415096 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1415266 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1416897 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1418842 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1424335 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1424730 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1426592 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1426804 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1428759 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1431716 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1433394 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1438351 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1442424 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1443993 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1444501 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1452525 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1454206 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1473885 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1476956 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1480031 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1483039 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1483115 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1483535 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1484036 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1484532 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1484833 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1484847 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1485033 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1485141 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1485145 00:27:43.762 Removing: /var/run/dpdk/spdk_pid1485646 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1486062 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1486554 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1486858 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1486949 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1487053 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1487834 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1488403 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1492553 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1515282 00:27:44.025 Removing: /var/run/dpdk/spdk_pid1518116 00:27:44.026 Removing: 
/var/run/dpdk/spdk_pid1519047 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1520049 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1520104 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1520171 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1520271 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1520609 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1521608 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1522170 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1522417 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1523742 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1523991 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1524419 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1526280 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1530886 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1532919 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1535812 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1536565 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1537439 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1539460 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1541344 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1545090 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1545092 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1547331 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1547429 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1547533 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1547740 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1547809 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1549882 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1550182 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1552180 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1553725 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1556403 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1559110 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1564201 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1567685 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1567690 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1578365 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1578696 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1579014 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1579418 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1579871 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1580270 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1580587 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1580896 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1582831 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1583024 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1585937 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1585991 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1587333 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1591210 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1591220 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1593380 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1594534 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1595609 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1596262 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1597926 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1598609 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1602715 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1602938 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1603233 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1604458 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1604762 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1605063 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1606908 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1606918 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1608070 00:27:44.026 Removing: /var/run/dpdk/spdk_pid1608455 00:27:44.026 Removing: 
/var/run/dpdk/spdk_pid1608534 00:27:44.026 Clean 00:27:44.284 10:34:33 -- common/autotest_common.sh@1451 -- # return 0 00:27:44.284 10:34:33 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:27:44.284 10:34:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:44.284 10:34:33 -- common/autotest_common.sh@10 -- # set +x 00:27:44.284 10:34:33 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:27:44.284 10:34:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:44.284 10:34:33 -- common/autotest_common.sh@10 -- # set +x 00:27:44.284 10:34:33 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:44.284 10:34:33 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:44.284 10:34:33 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:44.284 10:34:33 -- spdk/autotest.sh@395 -- # hash lcov 00:27:44.284 10:34:33 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:44.284 10:34:33 -- spdk/autotest.sh@397 -- # hostname 00:27:44.284 10:34:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:44.544 geninfo: WARNING: invalid characters removed from testname! 00:28:16.606 10:35:02 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:16.606 10:35:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:19.919 10:35:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:22.456 10:35:12 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:25.735 10:35:15 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 
--no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:28.261 10:35:18 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:31.544 10:35:20 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:31.544 10:35:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.544 10:35:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:31.544 10:35:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.544 10:35:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.544 10:35:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.544 10:35:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.544 10:35:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.544 10:35:20 -- paths/export.sh@5 -- $ export PATH 00:28:31.544 10:35:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.544 10:35:20 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:31.544 10:35:20 -- common/autobuild_common.sh@447 -- $ date +%s 00:28:31.544 10:35:20 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721896520.XXXXXX 00:28:31.544 10:35:20 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721896520.r45e3e 00:28:31.544 10:35:20 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:28:31.544 10:35:20 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:28:31.544 10:35:20 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:31.544 10:35:20 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:31.544 10:35:20 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:31.544 10:35:20 -- common/autobuild_common.sh@463 -- $ get_config_params 00:28:31.544 10:35:20 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:28:31.544 10:35:20 -- common/autotest_common.sh@10 -- $ set +x 00:28:31.544 10:35:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:31.544 10:35:21 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:28:31.544 10:35:21 -- pm/common@17 -- $ local monitor 00:28:31.544 10:35:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:31.544 10:35:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:31.544 10:35:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:31.544 10:35:21 -- pm/common@21 -- $ date +%s 00:28:31.544 10:35:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:31.544 10:35:21 -- pm/common@21 -- $ date +%s 00:28:31.544 10:35:21 -- pm/common@25 -- $ sleep 1 00:28:31.544 10:35:21 -- pm/common@21 -- $ date +%s 00:28:31.544 10:35:21 -- pm/common@21 -- $ date +%s 00:28:31.544 10:35:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721896521 00:28:31.544 10:35:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721896521 00:28:31.544 10:35:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721896521 00:28:31.544 10:35:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721896521 00:28:31.545 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721896521_collect-vmstat.pm.log 00:28:31.545 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721896521_collect-cpu-load.pm.log 00:28:31.545 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721896521_collect-cpu-temp.pm.log 00:28:31.545 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721896521_collect-bmc-pm.bmc.pm.log 00:28:32.486 10:35:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:28:32.486 10:35:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32 00:28:32.486 10:35:22 -- spdk/autopackage.sh@11 -- $ cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:32.486 10:35:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:32.486 10:35:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:32.486 10:35:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:32.486 10:35:22 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:32.486 10:35:22 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:32.486 10:35:22 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:32.486 10:35:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:32.486 10:35:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:32.486 10:35:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:32.486 10:35:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:32.486 10:35:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:32.486 10:35:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:32.486 10:35:22 -- pm/common@44 -- $ pid=1617099 00:28:32.486 10:35:22 -- pm/common@50 -- $ kill -TERM 1617099 00:28:32.486 10:35:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:32.486 10:35:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:32.486 10:35:22 -- pm/common@44 -- $ pid=1617101 00:28:32.486 10:35:22 -- pm/common@50 -- $ kill -TERM 1617101 00:28:32.486 10:35:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:32.486 10:35:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:32.486 10:35:22 -- pm/common@44 -- $ pid=1617103 00:28:32.486 10:35:22 -- pm/common@50 -- $ kill -TERM 1617103 00:28:32.486 10:35:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:32.486 10:35:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:32.486 10:35:22 -- pm/common@44 -- $ pid=1617131 00:28:32.486 10:35:22 -- pm/common@50 -- $ sudo -E kill -TERM 1617131 00:28:32.486 + [[ -n 1327779 ]] 00:28:32.486 + sudo kill 1327779 00:28:32.498 [Pipeline] } 00:28:32.516 [Pipeline] // stage 00:28:32.521 [Pipeline] } 00:28:32.539 [Pipeline] // timeout 00:28:32.545 [Pipeline] } 00:28:32.562 [Pipeline] // catchError 00:28:32.568 [Pipeline] } 00:28:32.585 [Pipeline] // wrap 00:28:32.592 [Pipeline] } 00:28:32.609 [Pipeline] // catchError 00:28:32.618 [Pipeline] stage 00:28:32.621 [Pipeline] { (Epilogue) 00:28:32.636 [Pipeline] catchError 00:28:32.638 [Pipeline] { 00:28:32.653 [Pipeline] echo 00:28:32.655 Cleanup processes 00:28:32.662 [Pipeline] sh 00:28:32.949 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:32.949 1617271 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:32.949 1617316 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:32.964 [Pipeline] sh 00:28:33.250 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:33.250 ++ grep -v 'sudo pgrep' 00:28:33.250 ++ awk '{print $1}' 00:28:33.250 + sudo kill -9 1617271 00:28:33.263 [Pipeline] sh 00:28:33.548 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:41.703 
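The coverage post-processing interleaved with the cleanup above follows the usual lcov capture/merge/filter pattern; a condensed sketch (each real invocation in the trace also carries the full set of --rc lcov_*/genhtml_* options, omitted here for brevity):

    # Capture per-host test coverage (autotest.sh@397).
    lcov --no-external -q -c -d "$SPDK_DIR" -t spdk-gp-02 -o cov_test.info
    # Merge the baseline and test tracefiles (autotest.sh@398).
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Strip external and tool code; the trace runs one lcov -r per pattern,
    # folded into a single call here (autotest.sh@399-403).
    lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
         '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o cov_total.info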
[Pipeline] sh 00:28:41.990 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:41.990 Artifacts sizes are good 00:28:42.004 [Pipeline] archiveArtifacts 00:28:42.011 Archiving artifacts 00:28:42.214 [Pipeline] sh 00:28:42.524 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:42.539 [Pipeline] cleanWs 00:28:42.550 [WS-CLEANUP] Deleting project workspace... 00:28:42.550 [WS-CLEANUP] Deferred wipeout is used... 00:28:42.558 [WS-CLEANUP] done 00:28:42.559 [Pipeline] } 00:28:42.580 [Pipeline] // catchError 00:28:42.593 [Pipeline] sh 00:28:42.875 + logger -p user.info -t JENKINS-CI 00:28:42.884 [Pipeline] } 00:28:42.901 [Pipeline] // stage 00:28:42.907 [Pipeline] } 00:28:42.925 [Pipeline] // node 00:28:42.931 [Pipeline] End of Pipeline 00:28:42.968 Finished: SUCCESS